Ofcom has published its Code of Practice for illegal content as part of the implementation of the Online Safety Act. Here, Molly Rose Foundation Chief Executive Andy Burrows sets out his concerns with the codes.
This publication should have been a defining moment for children and families. Yet six and a half years after the UK Government first committed to introducing online safety legislation, the regulator’s proposals were a bitter disappointment.
At Molly Rose Foundation we’ve always said that Ofcom should be judged by a single overarching objective – do its proposals do everything possible to tackle inherently preventable online harm?
Having gone through the regulator’s proposals, we’re afraid to say that Ofcom has badly failed that test.
- Ofcom’s Codes demonstrate a clear and palpable lack of ambition
Above all else, Ofcom’s task was to move fast and fix things. However, instead of strong and ambitious regulation, the announcement demonstrated an unjustifiable lack of assertiveness.
Ofcom seems determined to afford itself the luxury of gradualism when responding to online harms that require an urgent and committed step change in response. ‘This is just the start’ was a line repeated time and again by Ofcom on its broadcast rounds – but when we’re losing a young person aged 10-19 to suicide every single week, iteration is a technocratic approach that children and families can ill afford.
Ofcom could and should have done more to tackle inherently preventable online harm. Yet its failure to grasp the nettle means that harm will now continue to flourish, with only a sticking plaster approach to deep and sustained systemic failures.
This harm is now taking place on the regulator’s, and this Government’s, watch.
- The regulator has failed to understand many of the harms in scope
We are dismayed that Ofcom has failed to substantially update its register of risks to reflect a detailed and meaningful grasp of suicide and self-harm content that reaches the criminal threshold.
When Ofcom first consulted on its draft proposals a year ago, it was clear that its understanding of the issues fell short. It is inexcusable that its final register of risks has still not adequately addressed this.
The reality of suicide and self-harm on online services is that we have a suicide forum that has taken more UK lives than were lost at Grenfell – at the last count at least 97 deaths have been linked to the forum, including those of five children. The regulator fails to recognise this, and instead makes reference to research conducted on chat rooms between 2001 and 2008.
Separately, Molly Rose Foundation made the regulator aware as early as February of a disturbing new trend in which young people are being groomed on tech platforms for the purposes of coercing them into performing acts of self-harm, often on livestreams.
This disturbing new trend has caused sufficient alarm that the FBI and law enforcement agencies in Canada have both issued public advisories. However, Ofcom has chosen not to even include this in its assessment of the likely risks.
- The Code of Practice is insufficient and its recommended measures lack rigour
We are astonished that Ofcom’s codes fail to include a single targeted measure for social media platforms to tackle suicide and self-harm content that reaches the criminal threshold, despite the offence of assisting suicide being listed as a Priority Offence on the face of the Bill.
Ofcom’s generic recommended measures are generally weak or insufficient – featuring largely ‘going through the motions’ proposals on content moderation, risk assessment and management functions.
Perhaps more surprisingly, Ofcom has not only opted not to introduce a single new recommended measure in its codes, but in a number of key areas it has also watered down its draft proposals, including in respect of child sexual abuse.
It also appears that the regulator has set out a potential loophole that platforms could readily use to evade their compliance responsibilities: online services can now sidestep the requirement to have swift and effective content moderation functions if they can successfully claim ‘it is currently not technically feasible’ for them to achieve this outcome.
That sound you hear is platform lawyers and compliance teams rolling up their sleeves to game this poor drafting, and to claim – thanks to the Act’s ‘safe harbour’ provisions – that they are fully compliant while doing so.
- Insufficient risk assessments
While the Act takes a largely systemic approach, with platforms being required to risk assess their products and then take proportionate steps to address harms, much of the detail set out by Ofcom risks weakening the effectiveness of this overall approach.
As was the case with its draft proposals, Ofcom still seems to envisage the risk assessment process as a primarily desk-based exercise.
The regulator still treats algorithmic safety testing as an enhanced rather than a core input to a risk assessment exercise. Platforms are expected to undertake a ‘suitable and sufficient’ risk assessment, yet in practice some of the most important inputs are treated as an optional extra.
Ofcom’s code likewise requires platforms only to undertake safety testing where a service is medium or high risk for at least two types of illegal content, and where the platform already undertakes pre-existing tests on its recommender systems. This is the online safety equivalent of requiring a mechanic to lift up the bonnet during an MOT only because they were already planning to do so.
Finally, Ofcom’s risk assessment guidance more explicitly bakes in the expectation that platforms should refer primarily to its codes of practice when looking to mitigate potential risks. While its guidance does recognise that its recommended measures ‘may not eliminate all of the risks […] identified’, the lack of emphasis on putting in place additional measures further waters down the systemic approach that Parliament envisaged, and it takes us further away from the risk-based, outcome-focused approach we had once expected.
Next steps
Ofcom has promised that 2025 will be a year of action. Instead, we head into next year deeply pessimistic about the regulator’s lack of ambition, and with grave concerns about the lack of enforceable measures that the regulator has built into its codes.
Ofcom’s lack of ambition means that preventable harm will continue to flourish.
The regulator’s approach has starkly exposed deep structural issues with the Act. As it stands, the Act’s safe harbour provisions mean some large platforms could counterproductively scale back their existing safety measures, largely ineffective and deficient as they already are, while still being able to claim compliance with online safety regulation.
Ofcom’s approach to categorising high-risk services, endorsed by the Secretary of State, means that the regulator will take longer to build a case that high-risk, non-compliant suicide forums should be blocked from the UK. This delay will almost certainly mean more lost lives, including those of young people.
It’s now abundantly clear that the regulator is not prepared to take the ambitious approach that is required, and that the structural design of the legislation is a significant impediment to tackling inherently preventable harm.
There can be no more grounds for further delay or for failing to fix structural issues that will impede necessary action. Ofcom must substantially step up its game, and the Prime Minister and Secretary of State should urgently commit to a new Act that can fix and strengthen the regulatory regime, because the tragedy is that further delay will cost young lives.
If you’re struggling, just text MRF to 85258 so you can speak to a trained volunteer from Shout, the UK’s Crisis Text Line service.