Instagram's "Teen Safety" Shield Still Full of Holes, Warn Campaigners

April 23, 2025
Instagram's "Teen Safety" Shield Still Full of Holes, Warn Campaigners

Instagram's "Teen Safety" Shield Still Full of Holes, Warn CampaignersThe 5Rights Foundation has released a damning report revealing critical flaws in Meta's safeguards, just as the UK's communications regulator, Ofcom, prepares to publish its crucial children's safety codes under the Online Safety Act.

The 5Rights Foundation's investigation casts serious doubt on the effectiveness of Instagram's Teen Accounts, launched in September 2024 with the promise of enhanced privacy and content control, offering "peace of mind for parents." Researchers easily bypassed age restrictions by creating multiple fake accounts with false birthdates, exposing a fundamental weakness in the platform's age verification process.

Shockingly, these newly created "Teen Accounts" were immediately bombarded with problematic content. The report details how the fake profiles were shown sexualised imagery, subjected to hateful comments, and, most troublingly, recommended adult accounts to follow and message. This directly contradicts Meta's claims that the new accounts limit contact and the type of content young people encounter.

The campaigners argue that Instagram's algorithms remain a significant danger to children. Their research suggests these algorithms continue to promote "sexualised imagery, harmful beauty ideals, and other negative stereotypes" to teen users. Furthermore, the "Teen Accounts" were reportedly shown posts saturated with "significant amounts of hateful comments," creating a toxic online environment.

Beyond harmful content, the 5Rights Foundation also raised concerns about the addictive nature of the platform and the constant exposure of young users to sponsored and commercialised content, often without clear indicators.

Baroness Beeban Kidron, the founder of 5Rights Foundation, delivered a scathing assessment of Meta's efforts, stating, "This is not a teen environment. They are not checking age, they are recommending adults, they are putting them in commercial situations without letting them know, and it's deeply sexualised." Her words underscore the urgent need for more robust and effective safety measures.

Meta, in response to the research, maintained that the Teen Accounts "provide built-in protections for teens, limiting who's contacting them, the content they can see, and the time spent on our apps." The company also stated that "teens in the UK have automatically been moved into these enhanced protections, and under-16s need a parent's permission to change them." However, the findings of the 5Rights Foundation directly challenge these assurances, exposing a disconnect between Meta's claims and the reality experienced by users.

This critical report comes at a pivotal time, with Ofcom poised to publish its children's safety codes. These codes will outline the specific rules that online platforms must adhere to under the Online Safety Act to protect children. Platforms will then have a three-month window to demonstrate that they have implemented effective systems, including robust age checks, safer algorithms that avoid recommending harmful content, and efficient content moderation practices. The findings from 5Rights Foundation will undoubtedly add significant pressure on Ofcom to set stringent standards and hold platforms like Instagram accountable.

In a separate but related development highlighting the broader challenges of online safety for young people, BBC News has uncovered the existence of numerous self-harm groups, known as "communities," on the platform X (formerly Twitter). These groups reportedly contain tens of thousands of members who share graphic images and videos of self-harm. Disturbingly, some users within these groups appear to be children, raising serious safeguarding concerns.

Becca Spinks, an American researcher who discovered these X groups, expressed her shock, stating, "I was absolutely floored to see 65,000 members of a community. It was so graphic; there were people in there taking polls on where they should cut next." X did not respond to the BBC's request for comment. However, in a submission to an Ofcom consultation last year, the company stated, "We have clear rules in place to protect the safety of the service and the people using it" and affirmed its commitment to complying with the Online Safety Act in the UK.

The revelations from both Instagram and X underscore the persistent and complex challenges of ensuring children's safety online. As Ofcom prepares to enforce the new regulations, the spotlight will be firmly on social media giants to prove that their platforms are genuinely safe spaces for young users and that their promises of protection are more than superficial features. The well-being of children online demands nothing less than robust, independently verifiable safety measures and a genuine commitment from platforms to prioritise their young users' welfare over engagement and profit.

The latest revelations regarding Instagram's failure to adequately protect young users will undoubtedly fuel the deep-seated anxieties of British Muslim, British South Asian, and British Bangladeshi parents and guardians. Rooted in strong cultural values emphasising family honour, modesty, and the protection of children, these communities often hold heightened concerns about the content their children encounter online. The ease with which researchers bypassed age restrictions, and the subsequent exposure to sexualised content, hateful comments, and recommendations of adult accounts, will be particularly alarming. Parents from these backgrounds may feel a profound sense of betrayal by a platform that promised safety yet seemingly delivers their children into potentially harmful digital spaces, jeopardising their well-being and contradicting the values they strive to instil. The addictive nature of the app and the exposure to potentially exploitative commercial content will further compound these worries, reinforcing the need for stricter regulation and greater accountability from social media giants.