Instagram to start alerting parents when children search for terms relating to self-harm
Meta announced Thursday that Instagram will begin to alert parents if their child repeatedly searches the platform for language relating to self-harm and suicide.
The alerts will consider the time frame for the searches and only notify parents when they occur within a short period. The tech giant said it is now building a similar tool that will notify parents about their teens’ conversations with AI about self-harm.
The alerts will only be sent to parents using Instagram’s parental supervision feature in the U.S., U.K., Australia and Canada. The alert system will become available in other parts of the world later this year, Meta said.
The move comes as Meta faces increasing pressure over how its platforms affect young people. Parents who argue that social media use drove their children to die by suicide have lobbied Congress for years for greater regulation of Meta, and they have become increasingly prominent as lawmakers continue to push the Kids Online Safety Act.
Two trials are also underway, in New Mexico and California, in which the company is defending itself against allegations that its platforms addict teens to social media, create anxiety and serve as a breeding ground for sexual predators.
On February 18, CEO Mark Zuckerberg was grilled by plaintiffs’ lawyers in the California case who alleged that he intentionally designed Instagram to be addictive.
In announcing the new notification system, Meta emphasized that most teens do not look for information about self-harm on the Instagram platform.
“The vast majority of teens do not try to search for suicide and self-harm content on Instagram, and when they do, our policy is to block these searches, instead directing them to resources and helplines that can offer support,” the blog post announcing the new system said.
‘Erring on the side of caution’
Searches that trigger an alert could include “phrases promoting suicide or self-harm,” not only the terms themselves, the blog post said.
Notifications will be sent via WhatsApp, email or text, and parents who tap on an alert will receive advice on how to discuss self-harm and suicide with their child.
Meta said it plans to strike a careful balance and will not send alerts lightly, since too many notifications could make them “less useful overall.” It said its decisions about when to trigger the alerts were shaped by an analysis of Instagram search behavior and by advice from its Suicide and Self-Harm Advisory Group.
“We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution,” the blog post said. “While that means we may sometimes notify parents when there may not be real cause for concern, we feel — and experts agree — that this is the right starting point, and we’ll continue to monitor and listen to feedback to make sure we’re in the right place.”
Suzanne Smalley
is a reporter covering digital privacy, surveillance technologies and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.