
- Our new research finds algorithms continue to recommend suicide, self-harm and depression content at scale
- Instagram and TikTok accused of gaming the Online Safety Act as legislation comes into force
- Experts warn the Online Safety Act is not ambitious enough to tackle the scale of harm taking place on social media
- Ian Russell calls on Prime Minister to act urgently to strengthen legislation and address the preventable harm happening on his watch
TikTok and Instagram were recommending industrial levels of harmful suicide and self-harm content to teens just weeks before the Online Safety Act came into effect, actively putting young lives at risk.
Almost eight years on from the death of Molly Russell, our new research found algorithmically driven depression, suicide and self-harm content being recommended at a vast scale to accounts opened as a 15-year-old girl.
Ian Russell is warning that Ofcom’s timid implementation of the Online Safety Act is a sticking plaster in the face of this disturbing level of preventable harm, and the Prime Minister must strengthen the legislation to prevent further young lives being lost.
On teenage accounts which had engaged with suicide, self-harm and depression posts, the research shows algorithms continue to bombard young people with a tsunami of harmful content on Instagram Reels and TikTok’s For You page.
The report found:
- Almost all of the recommended videos watched on Instagram Reels (97%) and TikTok (96%) were found to be harmful: bombarding teens with harmful content in a similar way to what happened to Molly.
- Over half (55%) of recommended harmful posts on TikTok’s For You Page actively contained references to suicide and self-harm ideation and 16% referenced suicide methods: recommended videos included posts that promoted and glorified suicide, referenced suicide methods and normalised intense feelings of misery and despair.
- Harmful content is achieving deeply disturbing levels of reach: one in ten harmful videos on TikTok’s For You Page had been liked at least a million times, and on Instagram Reels one in five harmful recommended videos had been liked more than 250,000 times.
- New high-risk features on TikTok’s For You Page make it even more likely that teenagers will discover rabbit holes in a single click: for example, new AI-generated search prompts shown alongside recommended content even introduced researchers to new suicide methods.
Conducted in the weeks leading up to the implementation of the Online Safety Act, the research found both platforms to be gaming Ofcom’s new rules.
While both platforms had enabled teenagers to offer negative feedback on content being recommended to them, as required by Ofcom, they had also provided an option to be recommended more harmful content – including suicide and intense depression posts.
The report, Pervasive-by-design, was produced in partnership with The Bright Initiative by Bright Data and found that the material recommended by both TikTok and Instagram would have the same harmful impact as content which Molly Russell saw before her death in 2017.
An inquest heard Molly saw 2,100 suicide, self-harm and depression posts on Instagram alone in the six months before she died and concluded that social media contributed to her death.
The suicide and self-harm content, interspersed with material depicting depression and misery, points to cumulative harmful posts seen at high frequency that can send young people down rabbit holes of despair.
Previous research was carried out in 2023, and today’s report suggests things remain unchanged or have got worse, particularly on TikTok.
The research found that the business models of Big Tech, which incentivise engagement over safety through high-risk design features, are putting young people at serious risk of harm.
The report comes as Ofcom begins implementing its children’s safety codes under the Online Safety Act, claiming to stand ready to “tame toxic algorithms”.
However, the Molly Rose Foundation is concerned that the measures are palpably not strong enough to stand up to the level of harm uncovered by this report, with companies recommended to spend just £80,000 to fix the algorithms that cost Molly her life.
We are calling for the Government to step in and introduce a strengthened Online Safety Act that ensures companies have to address and mitigate all the risks young people are exposed to on their platforms.
Ian Russell, Chair of Molly Rose Foundation, said: “It is staggering that eight years after Molly’s death incredibly harmful suicide, self-harm and depression content like she saw is still pervasive across social media.
“Ofcom’s recent child safety codes do not match the sheer scale of harm being suggested to vulnerable users and ultimately do little to prevent more deaths like Molly’s.
“For over a year, this entirely preventable harm has been happening on the Prime Minister’s watch and where Ofcom have been timid it is time for him to be strong and bring forward strengthened, life-saving legislation without delay.”
Andy Burrows, Chief Executive of Molly Rose Foundation, said: “Harmful algorithms continue to bombard teenagers with shocking levels of harmful content, and on the most popular platforms for young people this can happen at an industrial scale.
“It is shocking that in the two years since we last conducted this research the scale of harm has still not been properly addressed, and on TikTok the risks have actively got worse.
“The measures set out by Ofcom to tackle algorithmic harm are at best a sticking plaster and will not be enough to address preventable harm. It is crucial that the Government and regulator act decisively to bring in much stronger measures that platforms cannot game or ignore.”
The research also found that platforms are profiting from large companies advertising adjacent to harmful posts.
Researchers were shown advertising adjacent to harmful material for one in every 9.5 For You Page posts watched consecutively on TikTok, and for one in every ten Reels watched on Instagram.
These included major fashion retailers popular with teenagers, fast food brands and UK universities.
Harriet Kingaby, Conscious Advertising Network Co-Founder, said: “This report exposes the shocking harmful content young people are exposed to on social media platforms, and highlights the role of advertising in incentivising its distribution.
“Platforms are profiting off this content, and advertisers are often unknowingly helping to fund it. The need for transparency could not be clearer: advertisers need to take control of their media supply chains.
“Advertisers cannot shy away from the role they play in ensuring young people remain safe online.”
Gregor Poynton MP, Chair of the APPG on Children’s Online Safety, said: “This damning report highlights how social media companies are still unforgivably pushing the most devastating harmful content to children as the Online Safety Act comes into force.
“It is crucial that Ofcom’s response lives up to the scale of the harm children are facing online but the regulator is currently falling short.
“We urgently need to tackle these issues head on and parents will rightly judge us by whether we do everything possible to protect their children from this pervasive and preventable harm.”
The report makes a series of recommendations to platforms, regulators, government and advertisers.
These include:
- Social media platforms must no longer be allowed to pay lip service to safety-by-design: designers and engineers remain actively incentivised to introduce increasingly insidious new engagement-based features that will inevitably deepen and extend the risk profile for young people.
- Ofcom must substantially revisit the scope and ambition of its regulatory approach: the regulator’s ambition is dwarfed by the magnitude and persistence of high-risk design features, underpinned by platform business models that actively prioritise user engagement over safety.
- The findings must increase pressure on the UK Government to strengthen the Online Safety Act: the Technology Secretary has described the Act as ‘uneven and unsatisfactory’, but the Government has been slow to commit to fixing systemic weaknesses in the Act’s design that mean its impact is currently curtailed.
- Advertisers must be prepared to step up and do more to ensure that social media platforms are made fundamentally safer-by-design: advertisers must be prepared to use their substantial leverage to demand better from tech companies – managing reputational risks for them, while simultaneously delivering meaningful improvements for young people.
Or Lenchner, CEO of Bright Data, said: “We are pleased to have provided the tools for The Molly Rose Foundation to complete its report on social media and adolescent harm for a second time.
“Bright Data’s tools and marketplace datasets make public web data accessible, and help the MRF to achieve its mission by holding social media platforms accountable and making the internet safer.”
If you’re struggling, just text MRF to 85258 to speak to a trained volunteer from Shout, the UK’s Crisis Text Line service.