Ukraine warns of growing AI use in Russian cyber-espionage operations
MUNICH, Germany — Russia is increasingly using artificial intelligence to analyze data stolen in cyberattacks, making its operations more precise and effective, according to Ukrainian cyber officials.
For years, Russian hackers have exfiltrated vast amounts of data from Ukrainian government agencies, military personnel, and ordinary citizens. However, analyzing and utilizing these large datasets has posed a challenge. Now, AI is helping to bridge that gap, according to Ihor Malchenyuk, director of the cyberdefense department at Ukraine’s State Service of Special Communications and Information Protection (SSSCIP).
Speaking at the Munich Cyber Security Conference (MCSC) on Thursday, Malchenyuk said that as soon as Russian hackers gain access to a victim’s system, they use machine learning models to sift the victim’s mailbox for the most valuable information. They then use this data to craft targeted phishing campaigns, he added.
In the latest example, Ukrainian military personnel have been targeted on encrypted messaging platforms like Signal, receiving highly customized messages designed to deceive them into clicking malicious links. Once clicked, these links can compromise their accounts and expose sensitive information, said Natalia Tkachuk, head of cyber and information security at Ukraine’s National Security and Defense Council.
"The attacks are becoming increasingly sophisticated," Tkachuk told Recorded Future News on the sidelines of MCSC. "Hackers now personalize phishing messages with the recipient’s name, military rank, and even official documents they were previously involved with."
Ukraine is also employing more AI in its cybersecurity efforts, Tkachuk said, but declined to disclose details.
According to a recent report by SSSCIP, Russian cyberattacks against Ukraine are increasingly focused on cyber-espionage, with attackers using compromised accounts and phishing emails as primary entry points.
Ukrainian cyber officials have also observed growing collaboration between Russian state-backed hackers and cybercriminal groups. In these operations, financially motivated hackers infiltrate victims’ systems to steal funds and then pass on access and stolen data to state-sponsored operatives. This data is then analyzed using AI, according to Tkachuk.
Other countries and big tech companies have previously raised similar concerns about the use of AI by Russian threat actors.
In November, U.K. cabinet minister Pat McFadden said that Russia is trying to use artificial intelligence to enhance cyberattacks against the country’s infrastructure.
In a report last year, Microsoft said that state-backed hackers from Russia, China, and Iran have been using tools from OpenAI to support their malicious cyber activities.
According to Microsoft, these threat actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks. The identified OpenAI accounts associated with them were terminated, Microsoft said.
Another way Russian threat actors could use AI is by inserting deepfake voice clips into real videos of politicians, said Ginny Badanes, senior director of Democracy Forward at Microsoft. This strategy is highly effective, as the clips can be difficult to detect, she added at MCSC on Friday.
Daryna Antoniuk is a reporter for Recorded Future News based in Ukraine. She writes about cybersecurity startups, cyberattacks in Eastern Europe and the state of the cyberwar between Ukraine and Russia. She previously was a tech reporter for Forbes Ukraine. Her work has also been published at Sifted, The Kyiv Independent and The Kyiv Post.