Report: Ableism is already embedded in surveillance tech
Surveillance technology used in a range of industries, including policing and education, often relies on algorithms that leave disabled people at heightened risk of discrimination, warns a recent report from the Center for Democracy and Technology.
The study reviews recent developments in automated decision-making and surveillance tech across health care, employment, policing and education — which lead author and CDT policy counsel Lydia X. Z. Brown told The Record only became more ingrained in our lives during the pandemic.
“Algorithms are inescapable,” Brown said. “As a result, existing practices and patterns of discrimination are therefore being replicated, exacerbated and accelerated by the use of algorithmic technology.”
The pandemic will likely lead to a spike in the number of people who experience disability: A recent Centers for Disease Control and Prevention study reported that 1 in 5 adult survivors of Covid-19 appear to develop long-term health issues.
And the disease alone is not the only disabling aspect of the pandemic.
“A lot of people who are now experiencing disability — because they’ve had and survived Covid-19, or because they’ve experienced long-term mental health effects over the pandemic, or they’ve experienced the trauma of racial terrorism, or interpersonal violence from being dependent on and stuck around their abusers for longer — may not understand or realize they could count legally as being disabled,” Brown said.
But whether people are aware of their disabled status or not, they can still be harmed for it.
Surveillance technologies and the algorithms that operate them can directly discriminate when processing data related to disabilities, but there can also be proxy discrimination, in which other information is analyzed in a discriminatory way, according to Brown.
For example, many schools turned to remote proctoring systems to monitor whether students were cheating on online exams; some of those systems rely on algorithms to flag potential signs of academic dishonesty.
However, behaviors banned or considered suspicious by those systems may also be signs of protected disabilities, such as irregular eye movements or staring off-screen and needing to take frequent bathroom breaks, Brown said.
In some cases, the knowledge that a student is being closely watched could also exacerbate health conditions such as anxiety or post-traumatic stress disorder, they explained.
Disabled people already face a variety of economic barriers and are twice as likely to be unemployed as non-disabled people, according to the U.S. Bureau of Labor Statistics. Meanwhile, workplace surveillance systems, such as automated productivity monitoring software, may penalize disabled people who need accommodations like schedule flexibility to perform their job duties.
Depending on how they are deployed, those systems may also put companies on the wrong side of both general labor and disability rights law — although enforcement is inconsistent.
“We do have a robust array of disability rights laws, federally and in many states,” explained Brown. “However, simply because those rights exist does not mean that every disabled person knows how to enforce their rights, is comfortable with disclosing a disability and requesting an accommodation, or is even aware that they are being discriminated against.”
Some parts of the government are now working to provide more clarity on these issues.
In mid-May, the Justice Department’s Civil Rights Division announced new guidance on ways artificial intelligence and algorithm use in hiring tools could violate the Americans with Disabilities Act.
Andrea Peterson
(they/them) is a longtime cybersecurity journalist who cut their teeth covering technology policy at ThinkProgress (RIP) and The Washington Post before doing deep-dive public records investigations at the Project on Government Oversight and American Oversight.