Instagram has announced a new feature that will alert parents when their teenage children repeatedly search for terms related to suicide or self-harm within a short period. The move comes as pressure mounts on governments to adopt regulations similar to Australia's ban on social media use for those under 16.
Under the initiative, Instagram, which is owned by Meta Platforms Inc., will notify parents who have opted into supervision settings when their teens attempt to access content related to suicide or self-harm. The alerts will begin rolling out next week in Canada, the United States, Britain, and Australia.
The platform said, "These alerts enhance our ongoing efforts to safeguard teens from potentially harmful content on Instagram. We enforce strict policies against content that promotes or glorifies suicide or self-harm." Instagram already blocks such searches and directs users toward support resources.
Governments worldwide are stepping up efforts to protect children from online harm, spurred in part by incidents such as the AI chatbot Grok generating non-consensual sexualized images. Following Australia's lead in December, Britain is considering restrictions to safeguard children online, and Spain, Greece, and Slovenia have signaled interest in limiting children's access to social media in recent weeks.
In the UK, measures aimed at keeping children off pornography websites have raised privacy concerns among adults and created friction with the US over the limits of free speech and the reach of regulation.
Instagram's "teen accounts" require users under 16 to obtain parental authorization before adjusting their settings. Parents can enable additional monitoring features with their teen's consent. The accounts also block young users from viewing sensitive content, including sexually suggestive or violent material.
