
What brain privacy will look like in the age of neurotech

Nita Farahany is one of the country’s foremost experts on neural data privacy and the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. As chair of the Uniform Law Commission’s Study Committee on Mental Privacy, she is leading work to refine a proposal for how states should regulate neural privacy.

Farahany has closely studied the evolution of neural technology and its implications for privacy and autonomy. She spoke with Recorded Future News about whether brain data will be commodified, how your brain can be hacked and the role artificial intelligence plays in allowing internal speech to be decoded.

This conversation has been edited for length and clarity.

Recorded Future News: You've spent a lot of time researching the next frontier of neurotechnology and the privacy implications of the expanding neurotech market. What do you see as the biggest dangers posed, and how should the government think about regulation of this field?

Nita Farahany: In nearly every aspect of our lives we have entered into digital cages [where] we are tracking nearly everything — our movements, everything we say, everything we do, every online purchasing behavior, every online website that we visit. 

The biggest risk I see is [the brain] no longer being a safe space — that it suddenly becomes up for grabs, like everything else is up for grabs. It's so fundamental to what it means to be human and the experience of being human that if we don't safeguard it, if we don't put into place the right set of laws and protections and design principles, we risk not just losing our privacy, but losing what it means to be human.

RFN: I was speaking with an expert who told me that she thinks it will become increasingly hard for people to avoid giving up neural data. She said it is possible that down the road companies could make earbuds, where if you want to use them you have to allow for neural data collection. Do you see the marketplace evolving this way?

NF: It could. What I see, and what I talk about in my book, is the coming age of widespread neural interfaces. And what I mean by that is up until now, most of the neurotechnology that's been on the market has either been implanted neurotechnology for medical purposes or it has been entertainment-focused and incredibly narrow.

At the same time, the sensor market has been exploding and the capabilities of AI have been exploding, and so people are increasingly used to wearing a watch with smart features that pick up their heart rate or temperature. There are rings that do that. Devices that have come to the market, like glasses and virtual reality devices — these are packed with sensors, from cameras to everything else, and the traditional ways that we've interacted with our technology, like using a keyboard or a mouse, don't make sense for the new computing platforms that are coming out. If you have on augmented reality glasses, using a keyboard or a joystick or a mouse adds a layer of friction between you and those devices.

What Meta has just introduced, and what Apple has now made native as part of its accessibility protocols, is the ability to pick up your intentions through neural signals and sensors that AI decodes to allow you to navigate through all of that technology. So I think the first generation of most of these devices will be optional. That is, you can get the smart watch without the neural band, you can get the AirPods without the EEG [electroencephalogram] sensors in them. But just as you can't get an Apple Watch now without a heart rate sensor, by the second and third generation of these devices, I think your only option will be to get the devices that have the neural sensors in them. And the only way that you can navigate through using an augmented reality headset or a virtual reality headset, or eventually, potentially, even any other computing platform, will be neural interface technology.

RFN: An analogy would be how we've all decided we want to use Google, so we'll just allow our data to be tracked.

NF: Yeah, and it's a free service, right? It's a free service that we understand, that we pay for with our data, and it's powering an ad tech ecosystem. Is it necessary for companies to commodify your brain data in order to be able to provide you services? No. We can build different business models. The question is will we. I think it will require that there actually be demand side pressure, both from laws, but also from consumers.

It's notable that the first products on the marketplace aren't doing it that way, right? If you look at Meta’s neural band, they've decided to keep brain data on devices rather than commodifying that data for now. That's a term of service that could quickly change.

RFN: You have said that in the future wearables may allow bosses to see how workers are reacting to them, or police to see a criminal suspect's reaction to a crime scene photo. Can you talk through these examples and others showing how our thoughts will no longer be private and how emotions will become readable?

NF: I don't think every thought you think, through simple EEG sensors, is going to become decodable. But I think a lot of what you think will become decodable, and a lot more than just emotions.

Increasingly there is priming and response that you can see with EEG. One of the examples I use in the book is how there are reports in China of workers who are required to wear these EEG headsets being presented with communist messaging, and their reaction to that messaging in the workplace becomes powerful evidence that the state can gather against them to see if they react negatively. Imagine you want to know, ‘How does this person react to an image of the boss?’ They claim that they really like that presentation. What's their real-time reaction to it? There's a lot of information that you can start to expect may be mined from people's brains. Are they paying attention, or is their mind wandering? It ranges from metrics and proxies of productivity to more granular information about what a person is thinking or feeling.

RFN: We're already in the place where neural tech can read stress. How far out do you think these more granular capabilities are?

NF: Every time somebody says, ‘Oh, that's five or 10 years away,’ a few months later, there's some leap in capabilities and AI technologies that makes them take it all back.  

RFN: What role is AI playing?

NF: AI predicts the next token. If you've been trained on the entire corpus of human text and knowledge, you know that when a person thinks ‘close the,’ the next word is most likely ‘door’ or ‘window.’ There's a prediction for what most likely comes next. So if you're decoding a brain state and you're able to reliably decode ‘close,’ your ability to predict ‘close the door’ becomes increasingly more powerful as AI becomes more powerful. And so you go from having to translate every brain signal to having an incredibly powerful predictive machine that, once you have some brain signal, can predict the rest of what a person is thinking. That becomes quite easy, and that is really powerful.

For example, Mark Zuckerberg was showing the neural band and how he could type pretty efficiently and quickly with it just by thinking about typing or having the tiniest movement. He was maybe up to 45 to 50 words a minute. That's, again, using the power of generative AI, of predicting the next token. It’s like a very powerful autocomplete for your mind. 
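To make the next-token idea above concrete, here is a minimal sketch, assuming a toy, invented probability table standing in for a real language model: once a decoder reliably recovers a single word like ‘close,’ prediction alone can supply a likely continuation. None of the names or numbers below come from an actual brain-decoding system.

```python
# Illustrative sketch only: a toy next-token predictor showing why decoding
# one word ("close") lets a predictive model fill in a likely continuation
# ("the door"). The probability table is invented for demonstration.

NEXT_TOKEN_PROBS = {
    ("close",): {"the": 0.85, "it": 0.10, "up": 0.05},
    ("close", "the"): {"door": 0.55, "window": 0.35, "deal": 0.10},
}

def predict_next(context):
    """Return the most probable next token given the words decoded so far."""
    candidates = NEXT_TOKEN_PROBS.get(tuple(context), {})
    return max(candidates, key=candidates.get) if candidates else None

# Suppose a neural decoder only recovers the first word with confidence.
decoded = ["close"]
while (next_word := predict_next(decoded)) is not None:
    decoded.append(next_word)

print(" ".join(decoded))  # prints: close the door
```

A real system would draw these probabilities from a large language model rather than a hand-written table, but the principle is the same as the "autocomplete for your mind" Farahany describes.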

RFN: In your book, you also talk about how unlocking our brains could open our mind up to targeted assaults and hacking. Can you talk about how that could play out and how it can be prevented?

NF: There are a couple of ways to think about hacking. One is getting access to what you're thinking, and another is changing what you're thinking. One of the now-classic examples in the field is how researchers were able to, when somebody was using a neural headset to play a video game, embed prompts that the conscious mind wouldn't register in order to figure out the person's bank account PIN code and mailing address. In much the same way that a person's mind could be probed for how they respond to communist messaging, it could be probed for recognition of a four-digit code or some combination of numbers and letters to try to get at a person's password without them even realizing that's what's happening.

Some of these devices, especially the implanted ones, are read-write devices, where they can give feedback directly into the brain. One of the examples I talk about in the book is a patient named Sarah who had severe depression. Researchers were able to track what the neural signals were when she was most symptomatic and then interrupt those signals through direct neural stimulation that would prevent the firing of those particular neurons, which changes the brain. The question again is, how secure is that? And could somebody hack into those kinds of electrodes and make changes to what a person is feeling or experiencing? And the answer is, it's only as secure as the cybersecurity measures are, and there aren't a lot of cybersecurity measures being put into these devices.

RFN: And so, taking that to the next step, tell me what you believe mind warfare could look like.

NF: It's not impossible to imagine that if a weapon could be targeted directly at a person's brain, it would be possible to induce brain damage and to disorient them. I was at a conference on neurotechnologies and there was a guy from a startup company who was trying to get me to try on one of their devices. He put on these earphones and showed me an app, and he said, ‘Pick your experience. Do you want to feel drunk?’ I pushed the button and instantly experienced the spins, complete vertigo. It was extraordinary. I was stunned that you could target and induce an experience like that from something as simple as headphones paired with a device.

If a device can do that so precisely then it is not hard to imagine the development or investments in a space that would target and hack people's brains, even remotely, with signals that could do that, rather than through headphones that you willingly put on.

Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.