Image: dole777 via Unsplash

Privacy advocates see risk in new Meta policy that uses AI chats to serve targeted ads

Privacy experts are warning that a new Meta policy that tweaks ads based on users’ interactions with its AI features could be a slippery slope toward eroding privacy protections around the technology.

The new feature, which was announced October 1 and rolled out Tuesday, will “start personalizing content and ad recommendations on our platforms based on people’s interactions with our generative AI features,” the social media giant said in a blog post.

Users will not be able to opt out of the sharing, though it will only apply to those using Meta AI, which is integrated into Facebook, Instagram, WhatsApp and Messenger.

The move comes at a moment when consumers’ engagement with AI chatbots is skyrocketing, while experts and lawmakers probe how the technology may spur violence, self-harm and mental health problems.

In its announcement, Meta offered vague assurances about user privacy, saying that “when people have conversations with Meta AI about topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership, as always, we don’t use those topics to show them ads.”

Arielle Garcia, chief operating officer at digital advertising watchdog Check My Ads, said Meta has previously found workarounds to broadly worded privacy promises. She questioned whether sensitive information from chats will be used to train AI models or to optimize the creative content shown to consumers, for example by targeting them with ads featuring people who share their race or gender.

It also remains to be seen how the policy will address “proxy audiences” — for example, the chatbot might not share a user’s diabetes diagnosis explicitly but could send ad signals based on the user discussing World Diabetes Day, Garcia said.

Meta declined to comment on critics’ concerns about its new policy, pointing to its blog post.

Sensitivity of chats

Although the technology is relatively new, many people share sensitive information about mental health, religion, relationships, financial concerns and physical ailments with chatbots.

In April, Sam Altman, the CEO of OpenAI, said that he worries about the legal risks some users take on when sharing their deepest secrets with ChatGPT as they would with a doctor or lawyer.

“We don't have [privilege] yet for AI systems, and yet people are using it in a similar way, and that is a place where I think society will have to come up with a new sort of framework,” Altman said at the time.

Nathalie Maréchal, co-director of privacy and data at the Center for Democracy and Technology, said Meta’s new policy is risky because of how naive many people are about chatbots’ intentions.

“People think that they are interacting in a completely private, secure environment, which is false,” Maréchal said. 

“They're engaging with a statistical word prediction software that, while very impressive, does not actually represent any kind of a sentient entity… much less have a person's best interest at heart.”

Even if Meta works hard to filter chats and remove sensitive information, auditing millions of users’ statements won’t be easy, privacy advocates say. 

“A lot of these companies argue that safety regulations would be impossible to effectively implement because of how unpredictable chatbot outputs are, so certainly the reassurance on the other end of ‘we have this perfect system for filtering out sensitive content when we're using it for advertising’ just doesn't seem that persuasive to me,” said Hayden Davis, a legal fellow at the Electronic Privacy Information Center who focuses on platform governance and accountability.

A ‘direct financial incentive’

The fact that consumers cannot opt out of the sharing is notable, Davis said, and raises questions relating to knowledge and consent.

“We know exactly why Meta is using automatic opt-in and it's because they know that no consumer who was actually fully informed of what Meta is doing would willingly opt into this,” he said.

Meta has said it is notifying users of the change via in-product notifications and emails, but many people fail to read and digest such alerts.

Davis also worries that because chatbot interactions will become a profit center, Meta will be incentivized to make engagement more addictive for consumers, which he said has proven to be dangerous.

The estate of a Connecticut woman whose son allegedly murdered her in August is suing OpenAI and Microsoft, contending that the man’s intense interactions with ChatGPT fueled his perception that his mother was working against him.

In April, 16-year-old Adam Raine took his own life after extensive interactions with ChatGPT, which his parents have said helped their son write his suicide note.

The implications for child safety are especially stark. A recent survey by Common Sense Media found that more than half of teens use AI companions at least a few times a month.

“What we're seeing with AI-induced psychosis is a lot of that stems from when people are overusing these chatbots,” Davis said. “If Meta is using chatbot interactions for advertising, that means that Meta now has a very direct financial incentive… to design its AI products to manipulate users into both spending more time talking to the chatbots and into divulging ever more personal information to them.”

Critics also highlight Meta’s track record of privacy and advertising violations. The company was ordered by the Federal Trade Commission in 2019 to pay a record-breaking $5 billion penalty and submit to new privacy-related restrictions, and more recently has been accused of profiting from advertisements it knew were scams.

“When you then layer in promises of even greater precision when it's the same company that inhibited efforts to prevent that scale of scams on their platform, it's just incredibly concerning,” Garcia said. “It's likely to result in even more scam ads being served to even more users susceptible to those scams.”


Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career, Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.