Instagram Teen Accounts fail to protect children, first-of-its-kind testing of safety tools reveals

  • Research from Meta whistleblower Arturo Béjar, NYU/Northeastern University academics and UK/US child safety groups finds Meta has broken the safety promises it made about Instagram Teen Accounts
  • Two-thirds (64%) of safety tools tested were found to be ineffective, with just 17% working as described by Meta – leaving children at risk of harmful content and abuse
  • Meta urged to fix safety of Teen Accounts and warned that regulators and governments must urgently step in

Instagram’s Teen Accounts are abjectly failing to keep young people safe despite Meta’s PR claims, a major new report has revealed.

A major, systematic review of Instagram’s list of teen safety features found that less than 1 in 5 are fully functional and two-thirds (64%) are either substantially ineffective or no longer exist.

The report is the result of a landmark partnership between civil society and academia in the US and UK and was conducted by Meta whistleblower Arturo Béjar, Molly Rose Foundation, Fairplay, ParentsSOS and Cybersecurity for Democracy, based at NYU and Northeastern University.

The analysis dramatically undermines Meta’s claims that its often-referenced safety features adequately protect teens on Instagram, and suggests that Teen Accounts give parents false reassurance about much-heralded safety measures on the platform.

Among the disturbing findings are:

  • Users of Teen Accounts were able to view content that promotes suicide, self-harm and eating disorders, with autocomplete actively suggesting search terms and accounts related to suicide, self-harm, eating disorders and illegal substances.
  • Instagram’s algorithm incentivises children under 13 to perform risky sexualised behaviours for likes and views and encourages them to post content that receives highly sexualised comments from adults.
  • Teen Accounts were able to send and receive grossly offensive and misogynistic comments and messages to one another without the promised interventions by the platform.
  • Teen Accounts could view content featuring sexual descriptions and posts that describe demeaning sexual acts.
  • Test accounts were algorithmically recommended Reels featuring children as young as 6, and researchers found many public accounts of under-13s using Instagram features to announce their age.

The report urges Meta to fix the tools and for regulators and governments to respond robustly to the company’s false claims that Instagram Teen Accounts are safe for young people.

Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors finds that promises to parents about the safety of Teen Accounts fail to match reality and that the flawed introduction of these safety features has put young people at risk.

The analysis shows that teens using Teen Accounts can still receive inappropriate contact from adults and are encouraged to connect with adult strangers through follow suggestions, and that the tools to effectively manage how teens spend their time or to curb compulsive use are substantially ineffective.

The research found that Instagram’s own design features undermine the effectiveness of its safety tools. For example, teens are rewarded with a sea of emoji for selecting Disappearing Messages, are suggested adult strangers to follow, and are shown autocomplete suggestions for search terms and accounts related to eating disorders, suicide and self-harm.

The testing, undertaken by Arturo Béjar and Cybersecurity for Democracy, analysed 47 safety tools. Each tool was given a red, yellow or green rating.

Thirty of the tools were given a red rating as being non-existent or ineffective. Nine reduced harm but came with limitations and were given a yellow rating. Just eight tools were rated green and found to be fully functional.

The report makes a series of proportionate recommendations for Meta to fix these issues, urges regulators to investigate and act decisively, and calls for better regulation in the US and UK.

Campaigners urged lawmakers in the US to pass the Kids Online Safety Act, for the UK Government to strengthen the Online Safety Act and for Ofcom to act with greater urgency against failings by large tech companies.

Andy Burrows, Chief Executive of Molly Rose Foundation, said: “This report exposes systematic failures in Meta’s Teen Accounts and must be a wake-up call to governments, regulators and parents.

“Our findings suggest that Teen Accounts are a PR-driven performative stunt rather than a clear and concerted attempt to fix long-running safety risks on Instagram.

“These failings point to a corporate culture at Meta that puts engagement and profit before safety and pays lip service to doing the right thing.”

Arturo Béjar, Meta Whistleblower, said: “Meta consistently makes promises about Teen Accounts, consciously offering peace of mind for parents by seemingly addressing their top concerns including that Instagram protects teens from sensitive or harmful content, inappropriate contact, harmful interactions, and gives control over teens’ use.

“However, through testing we found that most of Instagram’s safety tools are either ineffective, unmaintained, quietly changed, or removed. And, because of Meta’s lack of transparency, who knows how long this has been the case, and how many teens have experienced harm at the hands of Instagram as a result of Meta’s negligence and misleading promises of safety, which create a false and dangerous sense of security.

“Parents should know, the Teen Accounts charade is made from broken promises. Kids, including many under 13, are not safe on Instagram. This is not about bad content on the internet, it’s about careless product design. Meta’s conscious product design and implementation choices are selecting, promoting, and bringing inappropriate content, contact, and compulsive use to children every day. Parents alone can’t protect their kids. The conversation that parents need to have with their kids is not about peace of mind, it is about what their kids should do when they get self-harm or eating disorder recommendations, or experience harassment and unwanted contact or sexual advances, or when they just can’t put their phone down.”

Fairplay Executive Director Josh Golin said: “More than 80% of Instagram’s safety tools for teens do not work as advertised, making it clear that Meta’s goal is to forestall regulation, not actually protect vulnerable young people.

“Enough is enough. Congress must pass the bipartisan Kids Online Safety Act now, and the Federal Trade Commission should hold Meta accountable for deceiving parents and teens.”

ParentsSOS Co-Founder Maurine Molak said: “Last year, Mark Zuckerberg publicly apologized to me and my fellow survivor parents, saying no one should ever have to experience our suffering.

“But since then, Zuckerberg and his fellow executives have given parents broken safety tools and broken promises with Instagram Teen Accounts, all but guaranteeing that more parents will lose their children to preventable online harms.

“This report makes clear that Meta cannot be trusted. It’s time for Congress to pass the Kids Online Safety Act to force Instagram and other online platforms to put our children’s safety first.”

Cybersecurity for Democracy Co-Director Dr. Laura Edelson said: “Currently, the user-facing safety tools we studied in this report are the best solutions Instagram has offered to protect young users from the risks their products pose. As this report shows, these tools have a long way to go before they are fit for purpose. However, the robust scenario testing we performed can help us not only understand which of these tools aren’t working, it can help us understand how to fix them. It’s my hope that in addition to helping the public understand the current state of Instagram’s user safety tools, this report will also highlight the usefulness of scenario testing for a wide range of independent research and testing.”

Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors is available here.

If you’re struggling just text MRF to 85258 so you can speak to a trained volunteer from Shout, the UK’s Crisis Text Line service.