White House orders federal agencies to implement AI safeguards, councils
The White House unveiled a slate of new orders and requirements for federal agencies related to the use of artificial intelligence.
Vice President Kamala Harris announced the order on Thursday, saying it is designed to “strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.”
By December 1, federal agencies are required to implement safeguards when using AI. Agencies must test and monitor how AI impacts the public and guard against algorithmic discrimination — an issue that has plagued the use of AI in healthcare, housing, education, criminal justice and many other sectors.
As an example, the White House statement notes that the Transportation Security Administration uses facial recognition at all airports, largely without asking or notifying people passing through. The order mandates that TSA give travelers the ability to opt out of TSA facial recognition “without any delay or losing their place in line.”
“When AI is used in the Federal healthcare system to support critical diagnostics decisions, a human being is overseeing the process to verify the tools’ results and avoids disparities in healthcare access,” the White House explained in another example.
“When AI is used to detect fraud in government services there is human oversight of impactful decisions and affected individuals have the opportunity to seek remedy for AI harms. If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.”
The new policy also includes measures forcing federal agencies to be more transparent about their use of AI. Agencies have to publicly release an annual inventory of the AI that is used and identify any ways it may impact rights or safety.
The order includes several other measures aimed at upskilling federal workers in the use of AI. The White House said it plans to hire 100 AI professionals by the summer of 2024. The President’s budget for fiscal year 2025 includes $5 million to expand the General Services Administration’s government-wide AI training program, which last year had over 7,500 participants from across 85 federal agencies.
Federal agencies must also designate chief AI officers, and the Office of Management and Budget (OMB) has convened a Chief AI Officer Council “regularly” since December. AI governance boards chaired by deputy secretaries will also be required thanks to the new order.
So far, only the Departments of Defense, Veterans Affairs, Housing and Urban Development, and State have established these governance bodies; every agency is required to do so by May 27, 2024.
“With these actions, the Administration is demonstrating that Government is leading by example as a global model for the safe, secure, and trustworthy use of AI,” the White House said.
The moves are a follow-up to an executive order issued by President Joe Biden in October that covered how the U.S. government uses AI.
Two weeks ago, the U.S. Commission on Civil Rights held a day-long panel with representatives from the Department of Justice, Department of Homeland Security, and Department of Housing and Urban Development on the use of facial recognition technology across federal agencies.
Ilia Kolochenko, CEO at ImmuniWeb, noted that the U.S. has struggled to match the efforts by the European Union to regulate AI and how it is used in many industries. Without a federal law regulating AI, several states have taken it upon themselves to pass laws managing AI usage.
“Resultantly, compliance becomes costlier than ever, pushing smaller AI vendors and startups to be acquired by tech giants with deep pockets that can afford to hire dozens of full-time lawyers, in addition to external law firms, eventually creating an AI oligopoly,” Kolochenko said.
“An overarching AI legislation will be greatly beneficial for sustainable innovation and long-term competitiveness of US tech firms on the global AI market.”
Clar Rosso, CEO of cybersecurity non-profit ISC2, echoed that sentiment, warning that many experts are concerned about the lack of regulation around AI.
The actions taken by the Biden administration are a step in the right direction and put someone in charge of AI usage within the federal government, Rosso said, but the order “isn't a silver bullet.”
“To work, we need collaboration, documentation of successes and failures, and both high-quality and high-quantity talent. It's clear that the administration recognizes how dangerous biases baked into AI tools can be, but we're going to need to put our heads together to determine how and when we deem AI/LLM models unbiased and ready for release,” Rosso explained.
“What do the tests or benchmarks look like for that? How frequently are they applied? There have been conversations for years about AI ‘fairness’ metrics and which metrics are the best to use to determine if AI is handling all of its data equally, but we haven't quite cracked the code on that yet – it's an area we need to focus on and one the Biden Administration should continue to support.”
For Rosso, one of the most important parts of the order is the measure requiring the hiring of 100 AI workers. Members of ISC2 have reported “extreme concern” over the last year about the AI skills gap, Rosso said.
Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.