Meta Platforms to Alert Parents if Teens Search Self-Harm Content on Instagram
Meta announced Thursday that Instagram will introduce new notifications for parents, designed to flag warning signs in teens' search activity. Parents who supervise a teen's account will be alerted if the teen repeatedly tries to access information about suicide or self-harm within a short period of time. The feature builds on Instagram's existing teen safety measures; last year the platform placed all users under 18 into a restrictive 13+ content setting that requires parental approval to change.
In its announcement, Meta said Instagram will begin alerting parents if their child repeatedly searches for terms related to suicide or self-harm within a short period of time. The company did not specify the time window or the number of searches that would trigger an alert. The feature will roll out next week in Australia, Canada, the UK, and the US, and will expand to other regions later this year.
These alerts will only be available to parents who have enabled Instagram's parental supervision tools. Both parents and teens connected through supervision will be notified before the feature goes live. If a teen repeatedly searches for phrases that promote suicide or self-harm, indicate an intent to self-harm, or include similar keywords, a notification will be sent to the parent.
The company further stated that alerts will be sent via email, text message, WhatsApp, or in-app notification, depending on the contact details provided. Tapping an alert opens a full-screen message that explains the repeated searches and offers expert guidance to help parents navigate the conversation.
Meta said it already blocks some searches related to suicide or self-harm and redirects users to support resources and helplines. To decide how many searches should occur before parents are notified, the company said it analyzed search behavior and consulted members of its Suicide and Self-Harm Advisory Group. It acknowledged that some alerts may be sent even when there is no immediate risk, but said the approach aims to balance caution against excessive notifications.
Meta added that it will roll out similar notifications for certain teen interactions with its AI tools later this year. These will notify parents if a teen attempts to have specific conversations about suicide or self-harm with the company's AI systems.