White House releases AI ‘bill of rights’ blueprint, calling for privacy protections
Image: Ana Lanza
Andrea Peterson October 4, 2022

The White House’s Office of Science and Technology Policy released a blueprint for an artificial intelligence “Bill of Rights” Tuesday — outlining five protections for the design and deployment of the technology, but laying out few concrete commitments.

Among the blueprint’s areas of focus is data privacy, which researchers have long warned could be further compromised by artificial intelligence.

“These technologies can drive great innovations, like enabling early cancer detection or helping farmers grow food more efficiently,” the administration said in a press release. “But in the United States and abroad, people are increasingly being surveilled or ranked by automated systems in their workplaces and in their schools, in housing and banking, in healthcare and the legal system, and beyond.”

Those automated decisions can magnify existing inequities, the administration warned. The “bill of rights” includes protections against algorithmic “discrimination” and abusive data practices, and proposes that users have the right to know when an automated system is being used, to “opt out” of certain tools, and to access human intervention.

The principles are not enforceable, but the administration pointed to work across the federal government on education, housing, and consumer protection in line with the blueprint. Some of that work is already in progress — including the Federal Trade Commission’s exploration of rules related to lax data security and commercial surveillance.

Center for Democracy and Technology President and CEO Alexandra Reeve Givens said the blueprint would be more effective if “built on a foundation set up by a comprehensive federal privacy law,” but praised the framework’s focus on privacy protection and the engagement of federal agencies.

“Federal agencies can play an important role in improving standards for AI audits, and ensuring that any entity using AI — from employers to lenders, landlords, schools, benefits programs, and more — understands the risks and their responsibility to avoid them,” Reeve Givens said. “The government can also lead by example, by reforming its procurement policies and engaging in oversight of agency use.”

ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, said the administration’s announcement of concrete actions by some agencies was encouraging, but called for further protections for how algorithms are used in policing and intelligence.

“The federal government must enforce these rights in the law enforcement and national security contexts, where the harms to people from automated systems are well-documented and more common and severe for marginalized groups, and where people’s liberty and due process rights are routinely at stake,” she said in a press release.

The U.S. intelligence community and Defense Department both follow their own previously set principles regarding ethical artificial intelligence use. The White House referenced those rules, which are similar but not identical, in its announcement.

Andrea Peterson (they/them) was a senior policy correspondent at Recorded Future News and a longtime cybersecurity journalist who cut their teeth covering technology policy at ThinkProgress (RIP), then at The Washington Post from 2013 through 2016, before doing deep-dive public records investigations at the Project on Government Oversight and American Oversight. Their work has also been published at Slate, Politico, The Daily Beast, Ars Technica, Protocol, and other outlets. Peterson also produces independent creative projects under their Plain Great Productions brand and can generally be found online as kansasalps.