
- Molly Rose Foundation writes to Ofcom boss Melanie Dawes urging the regulator to hold Meta to account
- Meta chatbots found to have serious design flaws that put children’s safety at risk
- Charity says flaws expose serious failings in Meta’s risk assessments under the Online Safety Act
Ofcom has been urged to investigate Meta over AI chatbots that allowed romantic or sensual conversations with children.
The call follows reports that internal guidelines at the company permitted its chatbots to engage in provocative and harmful conversations with young people.
Meta has since announced a set of interim changes under which, it is understood, the platform will no longer train its chatbots to engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate sexual and romantic conversations.
But we have written to the regulator’s Chief Executive, Melanie Dawes, arguing that the company may have breached its risk assessment duties under the Online Safety Act: it acted to address foreseeable risks only after media coverage and political pressure, when UK law required those risks to be identified and addressed earlier.
Under the Online Safety Act, platforms were required to complete risk assessments for content harmful to children by July 25th, but Meta’s assessment appears to have been a box-ticking exercise given the major gaps exposed by the flaws in its chatbots.
The letter read: “This case raises broader and substantive concerns about the approach that Meta is taking to comply with the letter and spirit of UK regulation, with many observers legitimately able to question whether anything has meaningfully changed in Menlo Park, amidst a prevailing culture which continues to view safety-by-design and regulatory compliance as a tick-box exercise, rather than a first order concern.”
We also asked Ofcom why, so far, none of the regulator’s 40+ investigations under the Online Safety Act has been directed at large platforms owned by some of the biggest tech companies.
Andy Burrows, Chief Executive of Molly Rose Foundation, said: “It is shocking that Meta failed to identify the potential harm caused by its chatbots in its risk assessment and this leaves questions regarding what else has been overlooked.
“The fact Meta has rolled out new safety measures only after media pressure suggests there is a major flaw in their decision making, and that the company is still largely paying lip service to regulation.
“Ofcom must now act to hold companies to account for the robustness of their risk assessments and ultimately for making their products safe for children. They can signal their intention to do so by launching an investigation into Meta without delay.”
You can read the full letter here.
If you’re struggling, text MRF to 85258 to speak to a trained volunteer from Shout, the UK’s Crisis Text Line service.