Facial recognition’s latest foe: Italian knitwear
At first glance, the sweater looks like something from The Cosby Show: colorful swirls, crazy textures, a sort of abstract collage of greens, reds and yellows. But the knitwear has a secret mission: fooling facial recognition software.
Rachele Didero, the founder of Italian fashion tech startup Cap_able, wanted her clothing and designs to “have a function” beyond fashion. The resulting Manifesto Collection is a line of sweaters, hoodies, T-shirts and pants that double as an experiment in adversarial artificial intelligence. She is trying to create a blind spot in the all-seeing facial recognition systems that have become a fixture for surveilling public spaces the world over.
Image-recognition AI learns by processing millions of pictures of real-world objects, so Didero created patterns designed to make the technology misidentify what it sees. "What these garments are doing is confusing the algorithm" with patterns it isn't expecting, she said.
With her clothing line, Didero has created something of an AI head fake: patterns and shapes hidden in the fabric trick the software into misidentifying the humans wearing it.
The idea grew out of a conversation between Didero and some of her friends at the Fashion Institute of Technology, a design school in New York. They were talking about how much facial recognition software was expanding and whether they might do something to address it.
It turned out that one of her friends, an engineer from India, had been working on a kind of adversarial patch designed to intentionally fool AI programs into seeing something that’s not there.
“And then I was like, Okay, maybe we can do something together,” she said.
They created a line of clothing filled with hidden animals, figures and other distracting shapes: the kind of bright, shiny objects that facial recognition algorithms glom onto.
The clothing line is trying to get facial recognition to identify “zebras, elephants, giraffes or dogs,” Didero said.
To see how effective the designs really were, Cap_able tested the clothing with a deep learning algorithm called YOLO, which identifies and classifies objects. The Manifesto Collection isn’t foolproof. Didero said her clothing has around a 60% success rate with YOLO, meaning the software identifies a giraffe, say, but not a human face.
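The article doesn’t spell out how that test was run. As a rough illustration only, here is a minimal sketch of how one might check whether a photo fools an off-the-shelf detector, assuming the open-source ultralytics package and a pretrained YOLOv8 model (the specific YOLO version and pipeline Cap_able used aren’t stated); the image filename is hypothetical.

```python
# Sketch of a YOLO-based check: does the detector still see a "person",
# or only the decoy animals woven into the garment?
from ultralytics import YOLO

# Small pretrained COCO model; its 80 classes include "person", "zebra",
# "elephant", "giraffe" and "dog".
model = YOLO("yolov8n.pt")

def detected_labels(image_path: str) -> set[str]:
    """Return the set of class labels YOLO finds in the image."""
    result = model(image_path)[0]
    return {result.names[int(cls)] for cls in result.boxes.cls}

# Hypothetical test photo of someone wearing a Manifesto Collection piece.
labels = detected_labels("wearer_in_manifesto_sweater.jpg")
fooled = "person" not in labels
print(f"Detected: {labels}; person detection fooled: {fooled}")
```

Run over many photos, the fraction of images where no person is detected would give a rough success rate of the kind Didero describes.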
“These adversarial and digital images are keeping all the attention from the algorithm,” Didero said. “It’s like it's attracted by these colorful patterns, and it sees something that’s not there.”
Adversarial design
Traditionally, adversarial AI experiments have sought to improve AI rather than simply defeat it: researchers fool a model on purpose to expose weaknesses that can then be fixed.
Back in 2017, a University of California, Berkeley professor named Dawn Song worked out a way to convince a self-driving car that a stop sign wasn’t a stop sign after all. By placing stickers and tape in precise spots on the stop sign, she was able to trick the car’s image classifier into reading it as a “45 mph speed limit” sign instead.
“We wanted to see whether an attacker can actually manipulate a physical object” in such a way that the AI could be misled, Song said.
And it worked. The experiment confirmed Song’s worst fears about AI and the many ways adversaries might be able to exploit its vulnerabilities.
If AI is going to be used in the real world, Song reasoned, it needs to be tested. It needs to be challenged. And it needs to be resilient.
“We want to show that these models actually still have huge deficiencies,” Song said, especially in cases that could put people in physical danger.
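For readers curious what this kind of attack looks like in code, here is a minimal digital sketch using the fast gradient sign method (FGSM), a standard textbook technique. It is not Song’s physical sticker attack, and the model, class label and input below are stand-ins for illustration only.

```python
# FGSM sketch: tiny, targeted pixel changes can flip a classifier's prediction.
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ImageNet classifier standing in for "the model under attack."
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Nudge every pixel a small step in the direction that most increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical input: a random tensor standing in for a photo of a sign.
image = torch.rand(1, 3, 224, 224)
adversarial = fgsm_perturb(image, label=919)  # 919 is ImageNet's "street sign" class
print("original prediction:   ", model(image).argmax().item())
print("adversarial prediction:", model(adversarial).argmax().item())
```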
As the market for facial recognition technology booms, so does an industry working to subvert it.
Didero sees Cap_able’s clothes as just one tool in a broader fight for privacy.
“I'm giving the possibility to people not to be recognized every moment by this technology,” she said.
On Thursday, five U.S. senators called on the Transportation Security Administration to stop its use of facial recognition in airports, pointing to privacy concerns and a track record of disproportionately misidentifying Asian and African American people.
“American’s [sic] civil rights are under threat when the government deploys this technology on a mass scale, without sufficient evidence that the technology is effective on people of color and does not violate Americans’ right to privacy,” they wrote.
Will Jarvis is a podcast producer for the Click Here podcast. Before joining Recorded Future News, he produced podcasts and worked on national news magazines at National Public Radio, including Weekend Edition, All Things Considered, The National Conversation and Pop Culture Happy Hour. His work has also been published in The Chronicle of Higher Education, Ad Age and ESPN.