
FTC bans Rite Aid from using AI facial recognition following abuses

Rite Aid will be barred from using facial recognition technology for five years under a proposed settlement announced Tuesday by the Federal Trade Commission (FTC), which accused the pharmacy chain of using an often inaccurate, AI-powered facial recognition database to profile and harass customers.

The FTC alleged that between 2012 and 2020, Rite Aid used the database to identify customers it believed were shoplifters or “dishonest,” and that it forced employees to stalk and sometimes humiliate people who had been wrongly identified, according to the agency’s federal complaint.

Rite Aid did not take “reasonable measures” to prevent harm to consumers, the agency said. Its proposed order requires Rite Aid to implement comprehensive safeguards to “prevent these types of harm to consumers when deploying automated systems that use biometric information to track them or flag them as security risks,” according to an FTC press release.

The drugstore giant contracted with two unnamed companies to build a facial recognition database of so-called persons of interest, customers Rite Aid believed had committed or attempted to commit a crime in its stores. It populated the database not only with their pictures but also with their names and other information, such as year of birth and criminal background data.

The effort was so slipshod that Rite Aid used grainy images drawn from security cameras, employee phone cameras and even news stories to populate its database, which contained tens of thousands of pictures, according to the complaint.

Rite Aid on Tuesday announced it had settled with the agency and said in a statement that it was “pleased to reach an agreement with the FTC and put this matter behind us.”

The company said that while it is “aligned with the agency’s mission to protect consumer privacy,” it disagrees with the allegations in the complaint.

Rite Aid said the FTC’s allegations applied to a “pilot program” used in a “limited” number of stores. The company, which filed for bankruptcy in October, has about 2,000 stores and more than 45,000 employees.

“Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began,” the statement said. “Rite Aid’s mission has always been and will continue to be to safely and conveniently serve the communities in which we operate.”

Surveillance that “hurts people”

FTC Commissioner Alvaro Bedoya on Tuesday published a statement, saying in bold letters at the top: “Biased face surveillance hurts people.”

Bedoya gave examples of the real-world harm the surveillance caused: because of one false match, an 11-year-old girl was searched, and her mother had to miss work to comfort her traumatized child. He also noted that multiple customers were wrongly searched and thrown out of stores in front of their bosses and co-workers.

The order should be considered a signal that the agency “will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement.

Rite Aid did not tell customers it was using the database, and employees were instructed not to discuss it. The agency said the program disproportionately targeted minorities.

A landmark privacy enforcement action

Privacy advocates called the case major and said it could be the first in a series of FTC enforcement actions against companies improperly using biometric data.

“This is a groundbreaking case, a major stride for privacy and civil rights, and hopefully just the beginning of a trend,” John Davisson, the director of litigation at the Electronic Privacy Information Center, said via email.

Davisson said Rite Aid is one of many businesses using unproven techniques and “snake oil surveillance tools” to secretly screen consumers. But he said the FTC has sent a strong message that “algorithmic lawlessness is not an option anymore.”

Bedoya’s statement also warned the private sector to take heed.

“I want industry to understand that this Order is a baseline for what a comprehensive algorithmic fairness program should look like,” he wrote.

“Beyond giving people notice, industry should carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them,” he added.


Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.