OpenAI shuts down accounts linked to 5 nation-state hacking groups
OpenAI, the artificial intelligence company behind ChatGPT, said on Wednesday that it had terminated accounts on its services that were being used by threat actors linked to China, Russia, Iran and North Korea.
The announcement was made in collaboration with Microsoft — one of the company’s major investors — which released a report on Wednesday that detailed how various state-affiliated hacking groups are experimenting with large language models (LLMs) to potentially carry out cyberattacks.
Although the companies said they have “not identified significant attacks” using the LLMs they examined, they warned that state-linked hackers are looking for ways to use AI to improve their attack techniques.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft researchers said.
The companies said they observed two China-linked threat actors, which they referred to as Charcoal Typhoon and Salmon Typhoon, using LLMs for reconnaissance, to identify coding errors and to refine their operational command techniques. Salmon Typhoon has historically targeted U.S. defense contractors, government agencies and entities in the cryptographic technology sector, while Charcoal Typhoon has targeted a range of critical infrastructure sectors, mainly in Asia.
Forest Blizzard, Microsoft’s name for the Russian military intelligence hacking unit that’s also known as BlueDelta, Fancy Bear and APT28, was observed “interacting with LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.”
The group, which has been highly active in carrying out cyberattacks against Ukrainian entities in support of Russia’s military goals, may be targeting satellite and radar technologies in relation to conventional military operations in Ukraine, Microsoft and OpenAI said.
In addition to the Russia- and China-linked groups, the companies said they observed one group linked to North Korea (Emerald Sleet, which overlaps with Kimsuky) and one linked to Iran (Crimson Sandstorm, which has ties to Iran's Islamic Revolutionary Guard Corps) using LLMs to support their social engineering efforts, assist with vulnerability research and develop code to evade detection, among other things.
As part of Wednesday’s report, Microsoft published a set of principles to govern its efforts to prevent other state-backed hackers from abusing its AI models. The principles include identifying and disrupting the malicious use of the technologies, notifying other AI service providers of its findings, collaborating with other stakeholders to respond to risks, and publicly documenting how threat actors are using its systems and what measures the company takes against them.
Adam Janofsky is the founding editor-in-chief of The Record from Recorded Future News. He previously was the cybersecurity and privacy reporter for Protocol, and prior to that covered cybersecurity, AI and other emerging technology for The Wall Street Journal.