OpenAI takes down ChatGPT accounts linked to state-backed hacking, disinformation
State-backed threat actors from a handful of countries are now using ChatGPT for illicit purposes ranging from malware refinement to employment scams and social media disinformation campaigns, OpenAI said this week.
The latest report on malicious uses of the product documents efforts by accounts connected to China, Russia, North Korea, Iran, and the Philippines to harness ChatGPT in dangerous ways. It also exposes efforts by suspected crime groups in Cambodia to lure people into the cyber scamming industry.
The illicit uses of ChatGPT were largely split into three buckets: social media comment generation; malware refinement and cyberattack assistance; and foreign employment scams — with four of the 10 uses attributed to actors based in China.
Stirring the social media pot
OpenAI said it banned dozens of accounts that it saw using ChatGPT to bulk generate social media posts consistent with the activity of a covert influence operation.
Many of the China-based accounts issued prompts in Chinese and sought responses in English on a variety of topics, including:
- The shutdown of USAID
- Various sides of divisive topics within U.S. political discourse
- Backlash towards Taiwan
- Pakistani activist Mahrang Baloch, who has publicly criticized China’s investments in Balochistan
The accounts generated social media comments in English, Chinese and Urdu that were later found posted on TikTok, X, Reddit, Facebook and other platforms. Operators were seen creating an initial comment with one account and then switching to other accounts to post replies to that same comment.
Most of the comments and posts garnered few legitimate views and little engagement.
Russian ChatGPT accounts were also seen generating German-language content about this year’s federal elections in Germany and criticizing NATO, as well as the U.S., on platforms like X and Telegram.
OpenAI found similar social media comment generation campaigns from threat actors in Iran covering a variety of geopolitical topics. The company also banned accounts it found mass-generating comments in the Philippines backing the policies of President Bongbong Marcos.
Malware refinement
OpenAI attributed some accounts on the platform to nation-state hacking groups like APT5 and APT15, known respectively as Keyhole Panda and Vixen Panda.
The accounts generated content related to brute-forcing passwords and sought help writing scripts that would try multiple username and password combinations.
They also sought assistance with other efforts like scanning servers for specific ports, conducting AI-driven penetration testing and generating code that would automate operations on social media platforms.
OpenAI noted the hackers appeared to already possess previously built code that remotely controlled Android devices and could simulate swipes and clicks on social media platforms like Twitter, Facebook, Instagram and TikTok.
The groups asked ChatGPT multiple questions about the U.S. defense industry, networks used by the U.S. military and government technology.
“Multiple threat actors sought publicly available information on US Special Operations Command, satellite communications technologies, specific ground station terminal locations, government identity verification cards, and networking equipment, including how the hardware and software technology works,” OpenAI said.
“We disabled all accounts associated with this activity and shared relevant indicators with industry partners.”
The company said its investigation “provided unusually broad visibility into a network of PRC-affiliated threat actors and their operational workflows — including tool development, open source research, and infrastructure profiling.”
But it noted that it “found no evidence that access to our models provided these actors with novel capabilities or directions that they could not otherwise have obtained from multiple publicly available resources.”
Russian hackers were seen using ChatGPT accounts to develop and refine Windows malware, debug code and set up command-and-control infrastructure. The Russian operators were stealthy, using temporary email addresses to sign up for ChatGPT accounts and limiting each account to a single conversation about incremental improvements to their code.
The actors frequently abandoned accounts and created new ones, slowly building on each iteration of code with ChatGPT. OpenAI named the malware “ScopeCreep” and said it was used to infect video game players.
ScopeCreep gives hackers the ability to escalate privileges, evade detection, steal credentials and notify the attacker through Telegram.
“At this stage, the impact of ScopeCreep may have been mitigated by quick reporting and close collaboration with industry partners who were able to take down the malicious repository. We banned the OpenAI accounts used by this adversary,” the researchers said.
“Additionally, although this malware was likely active in the wild, with some samples appearing on VirusTotal, we did not see evidence of any widespread interest or distribution.”
Employment scams
As part of its wide-ranging IT worker scheme, North Korean operatives used ChatGPT extensively to generate fake resumes and personas that could be used to apply for jobs, the company said.
Threat actors allegedly tied to North Korea were banned after being caught using the program to perform work tasks, operate hardware and more.
“We detected two distinct strands of activity, likely representing two types of operator: core operators, and contractors,” OpenAI explained. “The core operators attempted to automate resumé creation based on specific job descriptions, skill templates, and persona profiles, and sought information about building tools to manage and track job applications.”
The account holders used ChatGPT as a research tool to help inform their remote-work setups, the researchers said. They also used it to generate text about recruiting real people in the U.S. to take delivery of company laptops, which would then be remotely accessed by the threat actors or their contractors.
The North Koreans allegedly used ChatGPT to research a number of technical tools that could be used to circumvent corporate security measures and “maintain a persistent, undetected remote presence.”
OpenAI also found accounts based in Cambodia focused on generating short recruitment-style messages in English, Spanish, Swahili, Kinyarwanda, German, and Haitian Creole. Cambodia has been the epicenter of the cyber scam industry, housing dozens of facilities where people from across Asia, Africa and South America are trafficked, held against their will, and forced to conduct a variety of fraudulent schemes online.
The now-banned Cambodia-based ChatGPT accounts were used to create messages offering people high salaries for trivial tasks, such as liking social media posts, and to translate sentences from Chinese into multiple languages.
OpenAI said the operation “appeared highly centralized and likely originated from Cambodia.”
Jonathan Greig
is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.