For someone who thinks deeply about the future of technology, Dr. Dan Geer lives a surprisingly traditional life. In addition to working as a senior fellow at In-Q-Tel, the nonprofit venture arm of the CIA, Geer runs a small farm in a “pretty rural” part of Tennessee and his only phone is a landline.
When I caught up with Geer towards the end of last year, he was in the middle of tweaking his farming plans due to disruptions from COVID-19—more of a focus on farmers’ markets, less emphasis on selling to restaurants. But our conversation quickly turned to his long career in cybersecurity, and how much has changed since he entered the field.
One thing that’s clear from any discussion with Geer is that he’s something of a philosopher-technologist. He often responds to questions with questions of his own—ones that are profound and unanswerable. Are humans a failsafe or liability in cybersecurity? How will artificial intelligence change our understanding of vulnerabilities? What will happen when two quantum computers are “aimed” at one another?
Geer is quick to point out that he doesn’t have all the answers. But he did offer up advice to people just entering the field of cybersecurity, and explained why he’s optimistic about some small things but has concerns about the big picture. Our conversation below has been lightly edited for clarity:
TR: You studied electrical engineering in addition to computer science at MIT, and went on to study biostatistics at Harvard. Can you talk about how you got into cybersecurity?
DG: I stumbled into it, because that was the only way you could get into it. At the time there was no training in cybersecurity. Now there’s more training than you can shake a stick at. When I got into it, you could be a generalist; you could know pretty much everything there was to know about computer security. Now it’s utterly impossible. Young people ask me for advice, and I say choose a specialty. It’s not possible at this point to be a generalist. You can have a serial specialization—go from this, to this, to this, like moving up in a big company—but you can’t start at the top. Sure enough, all the people I know who write really good papers are studying the minutiae of one little topic. I’m not making fun of it, but the field has broadened and what people can cover has narrowed. Maybe a good metaphor is mathematics. No one can be a good mathematician and cover the whole field, because it takes five years just to get to the frontier.
What do you do when nobody can understand it all? Is that when you give up and turn it over to the machines? Is that when you need a unified field theory, like the physicists? I’m lucky in that regard in that I got in at a time where it was plausible to know most of it, if not all of it—perhaps not things in the classified space. But now you don’t even know what you don’t know, and that’s a sea change.
TR: Is it a discipline you would recommend to young people today?
DG: Yes, absolutely. For one, there’s a lot of job security. I don’t know that I’d recommend they go into it with the idea that the accumulation of certifications is proof that they’re getting somewhere, though.
It’s a great specialty to have along with another one. If you’re a usability person, also knowing how security works is a good thing, and there’s even the Symposium On Usable Privacy and Security, or SOUPS. If you’re designing interfaces, it helps to know how to build security into it instead of bolting it on the side later. That’s just one example. But even if you do just focus on security, you should expect to have an ongoing refreshment as part of your education. Things change, and they change often. You have to keep up, and if you like it you can think of it as an intellectual plus—there’s a constant demand for study. Some people think it’s a plus, some think it’s a minus.
TR: What cybersecurity specialties or topics do you think are important to know?
DG: That’s a good question. In my view, the definition of security bears on this. Everyone has a definition, but the one I’ve settled on is that the state of security is the absence of unmitigatable surprise. Which means there will be surprises, but a state of security is one where the surprises are such that you will have a mitigation before long. If you lose the integrity of a large database, say somebody encrypts it, you have a fallback. That definition immediately leads you to ask what the goal of security engineering is, what the goal of trying to build something is. And I think the answer is no silent failure—not no failure, because that implies a kind of perfection [that doesn’t exist].
Self-testing for diminished operation is important. If you have 100 million users, it’s sort of a shame if what you have either works or it doesn’t. And if you have a big power failure, how do you turn the electricity back on? The answer is not all at once. You don’t know what will happen, and the surge will just set off another failure. So when I say no silent failure in the context of the absence of unmitigatable surprise, that’s how to think about it. This is just one guy’s opinion.
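Geer’s “no silent failure” principle can be sketched in code: a service that self-tests for diminished operation and reports a degraded state explicitly, rather than presenting a binary works-or-doesn’t answer. Everything here (the `Health` states, the `Component` type) is invented for illustration, not drawn from the interview.

```python
# Illustrative sketch of "no silent failure": report partial failure
# out loud instead of hiding it. All names are hypothetical.

from dataclasses import dataclass
from enum import Enum

class Health(Enum):
    OK = "ok"
    DEGRADED = "degraded"   # still serving, but a surprise has occurred
    FAILED = "failed"

@dataclass
class Component:
    name: str
    healthy: bool

def self_test(components: list[Component]) -> Health:
    """Return an explicit health state; never hide a partial failure."""
    down = [c.name for c in components if not c.healthy]
    if not down:
        return Health.OK
    if len(down) < len(components):
        # Partial failure: keep operating, but say so loudly.
        print(f"degraded: lost {', '.join(down)}")
        return Health.DEGRADED
    return Health.FAILED

status = self_test([Component("db-replica", False), Component("cache", True)])
print(status.value)  # prints "degraded", not a silent all-or-nothing answer
```

The design choice is that diminished operation is a first-class, reported state, so a surprise never goes unnoticed long enough to become unmitigatable.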
Years ago my consulting company did work for a large bank. One of the things they had done, for regulatory compliance, was put a lot of effort into making sure that if someone left the bank—voluntarily, fired, whatever—the value of their credentials would be set to zero quickly. They were worried that a system for quickly erasing a person could cause trouble if it went haywire, so they built a second system that watches the first. If it looks like 100 people have been fired in the last five minutes, it stops the deauthorization process and rings an alarm that goes to whoever is head of operations, and I know for a fact that it has saved that bank at least twice. You don’t want a thing that can erase people really fast to start erasing everybody; it would do it in a matter of seconds or minutes, before you notice anything has happened. So you have a second system watching the first, and its only job is to detect when a milestone is hit that might be an indicator of a failure of control. That’s the kind of idea I was trying to get at.
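The bank’s safeguard, as Geer describes it, amounts to a rate-based circuit breaker: a second system counts revocations in a sliding window and halts the first system when the rate looks like a failure of control. A minimal sketch, with invented class and method names; the interview’s figure of 100 people in five minutes is used as the default threshold:

```python
# Hedged sketch of a watchdog over a deauthorization system.
# Names and thresholds are invented for illustration.

import time
from collections import deque

class DeauthWatchdog:
    def __init__(self, max_revocations=100, window_seconds=300):
        self.max = max_revocations
        self.window = window_seconds
        self.events = deque()   # timestamps of recent revocations
        self.halted = False

    def record_revocation(self, now=None):
        """Return True if the revocation may proceed, False if halted."""
        if self.halted:
            return False
        now = time.time() if now is None else now
        self.events.append(now)
        # Drop events that fell outside the sliding window.
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        if len(self.events) >= self.max:
            self.halted = True        # stop erasing anyone else
            self.alert_operations()   # ring the alarm for a human
            return False
        return True

    def alert_operations(self):
        print("ALERT: possible runaway deauthorization; process halted")

wd = DeauthWatchdog(max_revocations=3, window_seconds=300)
for t in (0, 10, 20, 30):
    wd.record_revocation(now=t)
# On the third revocation inside the window, the watchdog halts the process.
```

Note that the watchdog fails toward human review: once halted, nothing further is revoked until an operator intervenes, matching the bank’s preference for a noisy stop over fast, silent erasure.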
Is a human in the loop a failsafe or a liability? If we’re approaching the point at which a human in the loop is a liability, what then? That’s more philosophical than job advice, but we are now looking to automation of security as a saving grace. And I think often it will be, but from time to time it won’t, so how to balance those two things is a big question.
TR: Are you optimistic about the future of cybersecurity?
DG: That’s also a great question. If you go to a Nobel Prize-winning scientist and ask if such and such a thing is possible, and they say yes, then it surely is. If they say no, you have to retain some doubt. It’s the same thing here. Am I optimistic? I’m optimistic that small failures can be automated out of practical existence. I am, however, a devotee of the theory of the black swan. I wrote this thing about the Rubicon—what do you do when you go over a boundary where you can’t go back? The thing that I’m not confident about is that we understand the implications of interdependent complexity—I just made that term up, but all these things are complex and depend on one another in ways we can’t fathom until later. If you look at the history of neurology, a lot of studies were only possible when someone deeply abnormal presented themselves for examination. The abnormalities allow you to know how the normal works. In a similar sense, the question in front of us now is: as the size of systems exceeds our ability to comprehend them, what is the black swan event?
I’m impressed by the experiments Russia and China have done where they ask: if we cut off connectivity between us and the rest of the world, would we still operate? They apparently think they would, and they cared about it enough to do an experiment. It seems to me that the core source of insecurity is dependence on the stability of something else. If I’m dependent on the stability of the electric power system, I’m vulnerable to it. And anything it is vulnerable to is transitive to me. In Peru, revolutionaries’ favorite tactic was to blow up transmission towers, because nothing got the public’s attention like having all the lights go out in a city. The vulnerability to the dependencies that you don’t know you have is a feature of complexity, insofar as complexity finds those dependencies in ways that often can’t be exposed except in the presence of failure or even compound failure. If you asked the FAA why planes crash, the answer would be something along the lines of “it’s rarely for just one reason.” Many things happened, and some of them were insurmountable. Some failures caused other failures. And the result is that you lost control of the plane.
Near misses are almost as important as actual crashes. And in the airline industry, there is in fact a near-miss database run by NASA, not the FAA, so it doesn’t have any regulatory component. You as an airline can get data out of it only if you’re willing to put data into it; the price of admission is sharing what you’ve got. I think we could argue that there’s a percentage of cyber events that must be reportable, and that we should steal that idea from the airline world. So I’m optimistic in the small, I’m pessimistic in the large.
The question for you and me and Congress and everybody goes back to a thought question from Bruce Schneier, which is: if we had perfect knowledge, would we characterize software vulnerabilities as sparse events? If they are sparse, then all the money you can spend finding and fixing them is good, because every time you take one out you diminish the store of unfixed problems. If they’re dense, it’s not worth anything to go and look for them, and in fact it makes things worse, because the checklist of things you have to protect yourself against just went up by one obscure failure that you didn’t know was possible, and now all the opponents know it’s possible. What good is it when I have 6,000 vulnerabilities and you tell me to fix two of them? If I had 14 vulnerabilities and you told me to fix two of them, I’d be ecstatic.
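The arithmetic behind Geer’s 6,000-versus-14 example is simple but worth making explicit; a hypothetical helper (not from the interview) shows why fixing two vulnerabilities matters only under the sparse assumption:

```python
# Back-of-the-envelope version of Schneier's sparse-vs-dense question,
# using Geer's own numbers from the interview.

def fraction_removed(total_vulns: int, fixed: int) -> float:
    """Share of the known vulnerability pool eliminated by the fixes."""
    return fixed / total_vulns

dense = fraction_removed(6000, 2)   # about 0.03%: barely moves the needle
sparse = fraction_removed(14, 2)    # about 14%: a real dent
print(f"dense: {dense:.4%}, sparse: {sparse:.2%}")
```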
Will artificial intelligence answer the question of whether software vulnerabilities are sparse or dense? I think that’s the question of the hour—what will AI do for us in this space, particularly if the AI is not explainable, if it can’t say why it’s doing something.
TR: When it comes to quantum computers, are you worried about the effect they could have on encryption or do you think we’re prepared?
DG: I’ve tried to understand quantum and I’ve failed. Of course, someone once told me that even if you think you understand it, you don’t. Where I work, we’re constantly looking at that question because from the intelligence community’s point of view if a general purpose quantum computer were to appear, it would change the balance of power. China especially, but also Russia, is spending a lot of real dollars on it.
Am I optimistic about it? My guess is that we’re a long way—a couple of decades—away from being able to simply spin up a quantum process of any size for any purpose for cheap. I think it’s harder than most anything else we’ve tried to do, harder than fusion. At the same time, we may be able to solve specialized problems, like breaking RSA. I suspect that cryptographers being as they are, the work they’re doing on post-quantum crypto will be done before there’s an emergency. In other words, I think they’ll have a new class of battleships before the enemy can sink the old class. I think you have to be wired right to be a cryptographer; I don’t think it’s a matter of study. It’s not for a lack of willpower that I can’t jump like Michael Jordan—there’s just something else. That being said, I think the crypto guys are not where this is going to fail.
Materials science I think is probably going to be a place where quantum has real impact, or for that matter designing drugs or proteins. I think those are definitely places where quantum will have some impact. Will we be better off, in some sense of the phrase, if every medicine I take is different from every medicine you take because it’s been tuned to our chemistry? That has many attractions. I’ve come to the conclusion that all technology is dual use—meaning it can be used for good things, and it can be used for bad things. Obviously designer drugs fall into that category.
My guess would be that quantum computers could be used as weapons, but I don’t know what to think if they were to be aimed at each other. What happens when two of them are aimed at each other? I don’t know the answer to that.
TR: You mentioned to me that you’ve been spending a lot of time at the farmers’ market?
DG: That’s just one thing. We actually run a farm here—not a particularly big one, and that means we don’t have million-dollar equipment and all that. But one thing you can still do as a small grower, and what we’re doing today, is growing seed for small seed companies. It helps to be small and isolated because that’s the only way you can keep the gene pool pure—I grow some oddball things. I can’t say it pays really well, but it’s cash flow positive, so there you go.
We grow cut flowers and what the restaurant trade calls shelf-stable goods, like paprika and cayenne peppers that are dried, powdered, packaged, and frozen. The only cut flowers we grow are the ones that don’t travel well, because otherwise you’re competing with bigger companies. Small is beautiful but it’s also difficult to survive.
TR: How long have you been doing this?
DG: Twenty-five years. More seriously as time went on, obviously. At first I was just wondering if we could do this, and then when it started to work it eventually became all-consuming, to the point where I do my day job at night and my night job during the day, that kind of thing.
The virus crisis has affected us in some ways but not others. We had made a pretty big commitment going into the fruit and vegetable restaurant trade this year, and of course that’s a complete loss. You really want to work with the chef-owned restaurants, because they tend to be quality-oriented and not price-sensitive… Over half of them haven’t made it and the rest are trying to get by. One of them kept the tables as they were, but filled half the seats with mannequins so it looks like a crowded place. That’s why we started doing the farmers’ market and a farm share, and one of the churches around here has a Saturday afternoon meal that we’ve been supplying pretty well.
Life goes on. I’m a commercial beekeeper, so there’s honey. My wife is a flower arranger. The flowers sell out every week, we can’t grow enough. We’re thinking about cutting back just to honey and flowers. I’m wasting your time, but people who go to farmers’ markets, by and large it’s not a trip to the store—it’s an outing. Impulse buys appeal to people. The stands next to us are a bakery, a lavender farm, a grass-fed beef operation… None of these are places you go to get your celery, lettuce, or other staples.
TR: Does your passion for farming have any connection to cybersecurity as an industry?
DG: No, not at all!