OpenAI models used in nation-state influence campaigns, company says

Threat actors linked to the governments of Russia, China and Iran used OpenAI’s tools for influence operations, the company said Thursday. 

In its first report on the abuse of its models, OpenAI said that over the last three months it had disrupted five campaigns carrying out influence operations. 

The groups used the company’s tools to generate a variety of content — usually text, with some photos — including articles and social media posts, and to debug code and analyze social media activity. Multiple groups used the service to manufacture phony engagement, replying to their own AI-generated posts with fake comments.

“All of these operations used AI to some degree, but none used it exclusively,” the company said. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet.”

The rise of generative AI has sparked fears that the tools will make it easier than ever to carry out malicious activity online, like the creation and spread of deepfakes. With a spate of elections this year and stark divisions between China, Russia, Iran and the West, experts have raised alarms.

According to the company, however, the influence operations have had little reach, and none scored higher than a 2 out of 6 on the “Breakout Scale,” a metric that measures how much influence specific malicious activity likely has on audiences. A recent report by Meta on influence operations reached a similar conclusion about inauthentic activity on its platforms.

OpenAI detected campaigns by two different Russian actors — one an unknown group it dubbed Bad Grammar and the other Doppelgänger, a prolific malign network known for spreading disinformation about the war in Ukraine. It also disrupted the activity of the Chinese group Spamouflage, which the FBI has said is tied to China’s Ministry of Public Security.

The Iranian group International Union of Virtual Media (IUVM) reportedly used the tools to create content for its website, usually with an anti-US and anti-Israel focus. An Israeli political campaign management firm called STOIC was also discovered abusing the models, creating content “loosely associated” with the war in Gaza and relations between Jews and Muslims.

OpenAI disrupted four Doppelgänger clusters. One used generative AI to create short text comments in English, French, German, Italian and Polish; another translated articles from Russian and generated text about them for social media; a third generated articles in French; and a fourth used the technology to take content from a Doppelgänger website and synthesize it into Facebook posts.

The report also highlights instances where the company’s software prevented threat actors from achieving their goals. For example, Doppelgänger tried to create images of European politicians but was blocked, and Bad Grammar posted generated content that still included the AI model’s refusals.

“AI can change the toolkit that human operators use, but it does not change the operators themselves,” the report said. “Our investigations showed that they were as prone to human error as previous generations have been.”

James Reddick has worked as a journalist around the world, including in Lebanon and in Cambodia, where he was Deputy Managing Editor of The Phnom Penh Post. He is also a radio and podcast producer for outlets like Snap Judgment.