Instagram to alert parents if teens repeatedly search for harmful content
Instagram will begin notifying parents if their teenage children repeatedly search for terms related to suicide or self-harm on the platform, its parent company Meta announced, Qazinform News Agency reports.
The new feature applies to families enrolled in Instagram’s parental supervision program. Parents in the United States, the United Kingdom, Australia and Canada will start receiving alerts next week, with other regions to follow later this year.
According to Meta, the alerts are designed to inform parents if a teen repeatedly attempts to search for sensitive terms within a short period. The platform already blocks such searches and redirects users to external support resources and helplines.
“Our goal is to empower parents to step in if their teen’s searches suggest they may need support. We also want to avoid sending these notifications unnecessarily, which, if done too much, could make the notifications less useful overall,” Meta said in a blog post.
Notifications will be sent via email, text message or WhatsApp, depending on the contact details provided, as well as through the parent’s Instagram account. Parents who receive an alert will also be directed to expert-backed resources on how to approach conversations with their child. The company added that it is also developing similar notifications related to teens’ interactions with artificial intelligence tools.
Earlier, Qazinform reported that Instagram introduced Teen Accounts, aligning content policies with PG-13 standards to create a safer, more age-appropriate experience while expanding parental oversight.