Jen Easterly speaks in November 2022. Image: CISA

CISA director: AI cyber threats the ‘biggest issue we're going to deal with this century’

A top U.S. cyber official expressed grave concerns about the security implications of generative artificial intelligence at a forum on Thursday, warning that legislative action is needed to regulate its use.

Cybersecurity and Infrastructure Security Agency Director Jen Easterly called popular AI tools like ChatGPT “the biggest issue that we’re going to deal with this century” due to the variety of ways they can be used by cybercriminals and nation states.

The tool’s release by Microsoft-backed OpenAI late last year kicked off an artificial intelligence arms race among the biggest tech companies, and regulators have been struggling to keep up ever since.

The European Union’s law enforcement agency, Europol, warned last week that chatbots like ChatGPT could be used for phishing attempts, the spread of disinformation and cybercrime.

“If you think about the most powerful weapon of the last century, it was nuclear weapons. They were controlled by governments and there was no incentive to use them. There was a disincentive to use them,” Easterly told an audience at the Atlantic Council.

“[AI is] the most powerful technology capability and maybe weapon of this century. We do not have the legal regimes or the regulatory regimes to be able to implement them safely and effectively. And we need to figure that out in the very near term.”

She went on to compare the current use of generative AI to the “original sin of the internet” — that it was not built with security in mind and forced people to create a “multibillion dollar cybersecurity industry to bolt on to it.”

Easterly said it also resembles social media in the way it was introduced to society with little examination of the potential ramifications. Social media “moved fast and broke things,” she said, but now we’re “breaking the mental health of our kids” through it as well.

“We are hurtling forward in a way that I think is not the right level of responsibility — implementing AI capabilities in production, without any legal barriers, without any regulation,” she said.

From right: Marshall Miller, principal associate deputy attorney general at the United States Department of Justice; CISA Director Jen Easterly; Nathaniel Fick, ambassador at large for cyberspace and digital policy at the State Department; and Acting National Cyber Director Kemba Walden. Image: Bureau of Cyberspace and Digital Policy via Twitter

“And frankly, I'm not sure that we are thinking about the downstream safety consequences of how fast this is moving and how bad people like terrorists … or cybercriminals or adversary nation states can use some of these capabilities, not for the amazing things that they can do, but for some really bad things that can happen.”

Easterly added that she has been thinking hard about ways the agency can “implement certain controls around how this technology starts to proliferate in a very accelerated way.”

The CISA director’s comments come days after Italy’s data protection agency temporarily banned ChatGPT, alleging the powerful artificial intelligence tool has been illegally collecting users’ data and failing to protect minors.

Italian officials also cited a March 20 data breach in which the payment information of some ChatGPT subscribers was exposed, along with some chat records. On Monday, Germany’s data protection commissioner told a local news outlet that they were considering a similar ban due to data security concerns. Several other European countries are mulling action.

More than 1,000 technology experts and investors, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging Microsoft, Google and other artificial intelligence industry stakeholders to observe a six-month moratorium on the training of AI systems.

For months, cybersecurity experts and researchers have raised alarms about generative AI tools being increasingly adopted within the cybercriminal world, lowering the barrier to entry by helping attackers write phishing emails, malware and even ransomware.

Researchers at cybersecurity firm Claroty used ChatGPT to write code for an attack framework that allowed them to take control of a system at Trend Micro’s Pwn2Own competition, winning $123,500 for the research.

BullWall co-founder Morten Gammelgaard told The Record that natural language AI like ChatGPT will “explode the efficacy of phishing overnight.” Today, cybercriminals typically write generic phishing emails and send them out to hundreds of thousands of people in the hope that even one recipient clicks the link inside.

Hackers can also perform “spearphishing,” in which they research a target and send customized emails with specific lures designed to get the victim to click on links or attachments.

“With AI you get the best of both worlds. Mass email campaigns that are highly targeted at a scale that can produce 100,000 custom attacks instantly. This will explode cybercrime, and there is an arms race between the largest companies on the planet, Google, Apple, Microsoft and others throwing billions of dollars to rush their AI apps out, often putting aside safety and use cases in exchange for being first,” Gammelgaard said.

“They have everything at stake if they lose their footholds. But the Russians and Chinese also are secretly funding billions of dollars into AI, but for cyber espionage, ransom and attacks. You can't stop it."

Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.