Instagram has announced plans to notify parents if their teenage children repeatedly search for suicide- or self-harm-related content within a short period. The move comes amid growing pressure on governments to follow Australia's ban on social media use by those under 16.
Instagram, owned by Meta Platforms Inc., said on Thursday that it will send alerts to parents enrolled in its optional supervision feature when their children attempt to access self-harm or suicide-related material. The alerts will begin rolling out next week for users in Canada, the United States, Britain, and Australia.
The platform stated, “These alerts complement our ongoing efforts to safeguard teenagers from potentially harmful content on Instagram. We maintain strict policies against content that promotes or glorifies self-harm or suicide.” Instagram already blocks such searches and directs users toward support resources.
Governments worldwide are increasingly prioritizing children's safety online, particularly in light of concerns surrounding the AI chatbot Grok, which has been implicated in generating non-consensual sexualized images. Following Australia's move in December, Britain said in January that it would explore measures to protect children online. Spain, Greece, and Slovenia have also recently signaled interest in restricting access to certain online content.
Instagram's “teen accounts” for users under 16 require parental consent to change settings, and parents can enable additional monitoring features with their teenager's approval. These accounts shield young users from “sensitive content,” such as sexually suggestive or violent material.
