
French government uses biased algorithm to detect welfare fraud, rights groups say

Amnesty International and 14 other organizations filed a complaint with France’s highest administrative court on Tuesday, alleging that the French agency dispensing benefit payments is using a discriminatory algorithm to identify welfare recipients committing fraud.

The algorithm is deployed by the French Social Security Agency’s National Family Allowance Fund (CNAF) and is designed to surface overpayments and mistaken benefit payments that go unreturned, Amnesty International said Wednesday in a blog post.

Last year, La Quadrature du Net (LQDN) — one of the organizations appealing to the French court to stop the algorithm’s use — received what Amnesty International called “versions” of the algorithm’s source code.

The source code, which was written by the French government, underpins the algorithm used to detect benefits fraud, the blog post said.

The organizations behind the complaint allege that CNAF has used the algorithm since 2010 and designed it to focus on people who are low-income, jobless, living with a disability or otherwise disadvantaged.

The algorithm assigns recipients of family and housing benefits a risk score between zero and one. People scoring closest to one are probed by a CNAF investigator, the blog post said.
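The scoring formula itself has not been made public. Purely as an illustration of the mechanism being described, a risk-scoring system that maps recipient attributes to a score between zero and one and flags the highest scorers for investigation could be sketched like this; every feature name, weight and threshold below is hypothetical:

```python
import math

# Illustrative only: the real CNAF model, its inputs and its weights
# have not been published in full. All names below are hypothetical.
WEIGHTS = {
    "low_income": 1.2,
    "unemployed": 0.9,
    "single_parent": 0.8,
    "disability_allowance": 0.7,
}
BIAS = -2.0
INVESTIGATION_THRESHOLD = 0.8  # hypothetical cutoff for triggering a probe


def risk_score(recipient: dict) -> float:
    """Squash a weighted sum of recipient attributes into (0, 1)."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if recipient.get(name))
    return 1.0 / (1.0 + math.exp(-z))  # logistic function


def flagged(recipient: dict) -> bool:
    """Recipients scoring closest to one are routed to an investigator."""
    return risk_score(recipient) >= INVESTIGATION_THRESHOLD


# A recipient whose only "risk factors" are socioeconomic circumstances
# is pushed toward the threshold by construction.
print(risk_score({"low_income": True, "unemployed": True, "single_parent": True}))
```

The structural point, which is what the complainants object to, is that when socioeconomic attributes carry positive weights, marginalized recipients are moved toward the investigation threshold by design.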

“From the outset, the risk-scoring system used by CNAF treats individuals who experience marginalization – those with disabilities, lone single parents who are mostly women and those living in poverty – with suspicion,” Agnès Callamard, secretary general at Amnesty International, said in a statement.

“This system operates in direct opposition to human rights standards, violating the right to equality and non-discrimination and the right to privacy.”

Bastien Le Querrec, a lawyer for LQDN, said the algorithm is a “translation of a policy of relentlessness against the poorest. Because you are precarious, you will be suspect in the eyes of the algorithm, and therefore controlled.”

The investigations often lead to suspended benefit payments and demands for reimbursement, LQDN said in a statement posted to its website.

Some 32 million people in France are members of households receiving CNAF benefits, Amnesty’s blog post said, noting that recipients’ sensitive personal data is regularly processed to inform the risk score.

While Amnesty did not look at specific instances of the algorithm’s use in France, the blog post said it has probed AI-enabled systems used by Dutch and Serbian authorities and found they do not effectively detect fraud.

In January, the Electronic Privacy Information Center filed a complaint with the Federal Trade Commission alleging that an automated fraud detection program used in 42 states relies on sensitive personal data to search for fraudulent use of unemployment insurance and food stamps. 

The complaint also said many people who qualified for the benefits were denied based on inaccurate predictions made by the algorithm.

In June, the Guardian reported that more than 200,000 people in the United Kingdom faced unjust probes relating to housing benefit fraud as a result of a government algorithm ranking them based on age, gender, number of children and tenancy agreement details.

Do you know about any programs using algorithms to discriminate? If so, please get in touch. Message Suzanne Smalley on Signal, which is end-to-end encrypted, at Suzanne.236 or send an email to [email protected].

Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.