It’s time to rethink the national vulnerabilities database for the AI era, senators say
Critical vulnerabilities in AI systems can be very different from those that affect regular software, so the federal government must update how it tracks such issues, two senators said Wednesday.
Sens. Mark Warner (D-VA) and Thom Tillis (R-NC) are proposing legislation that would require changes in the National Vulnerability Database (NVD), the federal repository of information about flaws in computer software and hardware.
The goal is “to improve the tracking and processing of security and safety incidents” related to AI, said the senators, who are the co-chairs of the Senate Cybersecurity Caucus.
The bill would require the National Institute of Standards and Technology, which maintains the NVD, to update it to reflect “the ways in which AI systems can differ dramatically from traditional software, including the ways in which exploits developed to subvert AI systems (a body of research often known as ‘adversarial machine learning’ or ‘counter-AI’) often do not resemble conventional information security exploits.”
The Cybersecurity and Infrastructure Security Agency (CISA) also would have to either update the current Common Vulnerabilities and Exposures (CVE) Program or create a new process for delineating security flaws in AI technology. CVE numbers are assigned to individual bugs, and those are incorporated into the NVD.
The senators’ announcement comes at a moment when the NVD program is in flux, with NIST sorting out the resources assigned to it. The agency said earlier this year that users could expect backlogs in the processing of bugs. A spokesperson for Warner noted that NIST has announced an industry consortium to help modernize the NVD.
The senator “supports full funding for NIST to, among other things, address the current issues,” the spokesperson said.
The legislation also would establish an Artificial Intelligence Security Center at the National Security Agency “to provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption,” Warner and Tillis said.
Martin Matishak contributed to this story.
Joe Warminsky
is the news editor for Recorded Future News. He has more than 25 years’ experience as an editor and writer in the Washington, D.C., area. Most recently, he helped lead CyberScoop for more than five years. Prior to that, he was a digital editor at WAMU 88.5, the NPR affiliate in Washington, and he spent more than a decade editing coverage of Congress for CQ Roll Call.