FBI warns of adversaries using AI in influence campaigns, cyberattacks

The FBI is paying increased attention to foreign adversaries' attempts to use artificial intelligence in influence campaigns and other malicious activity, as well as their interest in tainting commercial AI software and stealing aspects of the emerging technology, a senior official said Friday.

The two main risks the bureau sees are “model misalignment” — or tilting AI software toward undesirable results during development or deployment — and the direct “misuse of AI” to assist in other operations, said the official, who spoke on the condition of anonymity during a conference call with reporters.

The official said foreign actors are “increasingly targeting and collecting against U.S. companies, universities and government research facilities for AI advancements,” such as algorithms, data expertise, computing infrastructure and even people.

Talent, in particular, is “one of the most desirable aspects in the AI supply chain that our adversaries need,” according to the official, who added that the U.S. “sets the gold standard globally for the quality of research development.”

The warning came just days after FBI Director Christopher Wray rang the alarm bell about China’s use of the technology.

“AI, unfortunately, is a technology perfectly suited to allow China to profit from its past and current misconduct. It requires cutting-edge innovation to build models, and lots of data to train them,” he said at the FBI Atlanta Cyber Threat Summit. U.S. officials say the regime steals intellectual property and harvests large amounts of foreign data through illicit means.

In addition to nation-state threats, U.S. officials and cybersecurity researchers say criminals are leveraging AI as a force multiplier to generate malicious code and craft persuasive phishing emails, as well as develop advanced malware, reverse-engineer code and create “synthetic content” such as deepfakes.

“AI has significantly reduced some technical barriers, allowing those with limited experience or technical expertise to write malicious code and conduct low-level cyber activities simultaneously,” the FBI official told reporters. “While still imperfect at generating code, AI has helped more sophisticated actors expedite the malware development process, create novel attacks and enabled more convincing delivery options and effective social engineering.”

The official did not provide specific examples of those activities, and said the bureau has not brought any AI-related cases to court. The official also declined to rank one aspect of the dangers posed by the technology above another.

“We don't necessarily have a particular prioritization of the threats,” the official said. “As our mission dictates we are looking at all of these threats equally across our divisions.”


Martin Matishak is the senior cybersecurity reporter for The Record. Prior to joining Recorded Future News in 2021, he spent more than five years at Politico, where he covered digital and national security developments across Capitol Hill, the Pentagon and the U.S. intelligence community. He previously was a reporter at The Hill, National Journal Group and Inside Washington Publishers.