I want to briefly call out something which took place in yesterday’s Parliamentary debate about the Joint Committee Report into the Online Safety Bill.
This post deals with a discussion about content pertaining to self-harm and suicidal ideation. The Samaritans are always there to help on 116 123.
For the sake of this discussion, I’m going to depersonalise the topic in question to a more generic $badthing, and I’m going to depersonalise the platform in question to (Global Collaborative Encyclopaedia), because this discussion needs to be seen from two miles up.
In the debate, the Chair of the APPG on suicide and self-harm prevention called out (Global Collaborative Encyclopaedia) as a platform which is not currently in the scope of the Online Safety Bill, but in her view should be.
I have no doubt that she is speaking from the heart and that she’s fully qualified to discuss the topic, because in the course of her work, she will have seen content that none of us should ever have to see. But where she erred, in the context of the implications for the Online Safety Bill, was in suggesting that
1) The entire (Global Collaborative Encyclopaedia) must be brought into the scope of the regulation, because of this one topic;
2) That, by implication, a site which says
$badthing exists, here is information about $badthing
is, in fact, saying
$badthing exists! go do $badthing! here’s how to do $badthing.
This leap of logic comes as no surprise to anyone who has worked on the Online Safety Bill since it was in its infancy three years ago. In an attempt to make the UK the “safest place in the world to be online”, content and speech are viewed as trip hazards which can be nailed down.
In practice, that means that the presentation of factual information about $badthing is conflated with incitement to do $badthing.
And the site where that factual information resides – in this case, (Global Collaborative Encyclopaedia) – is viewed as complicit in that incitement.
(As if people looking for information on how to commit $badthing – as opposed to merely accessing factual content on it – won’t simply go elsewhere to find what they are looking for. In other words, you push people further underground, into darker places. And those places, I promise you, won’t care a jot about Britain’s patriotic objectives for the global internet.)
If this Bill is going to demand that $sites remove impartial and factual content about $badthing, on the grounds that information = incitement, that would open the door to restricting the reach of, or even taking down, any impartial and factual content about any $badthing.
That means my own past, and my own history, have already been labelled a trip hazard to my own self.
Those are the issues that this could raise for freedom of expression. The next matter is proportionality:
Is it reasonable, necessary, and yes, proportionate to demand that $site be brought into the scope of the UK’s domestic content moderation law, because of one page or one family of informational pages about $badthing?
Are there mitigations that could be applied to that content?
In this specific case, there are: the Samaritans’ guidelines for industry on how sites should manage, and moderate, content dealing with self-harm and suicidal ideation. I helped review the V1 draft several capacities ago and can confirm that this is some of the most well-thought-out work I’ve ever seen on content moderation.
Those mitigations could help prevent the whole of (Global Collaborative Encyclopaedia), its contributors, and the content on it from being swept into complex and censorious regulation, for the sake of one topic.
Could (Global Collaborative Encyclopaedia) do more? Probably. I don’t know. I don’t work for them. Is there scope for government to work collaboratively with them to help them do more, without trying to draft a law around specific topics, sites, and, yes, individuals, to the detriment of everyone involved?
Well, that depends on a deeper question, because there’s one last thing to consider:
(Global Collaborative Encyclopaedia) is not the UK’s social safety net.
$platform is not the UK’s social safety net.
Intermediary liability law is not a social safety net.
It is not the job of sites, platforms, or projects to save people from themselves.
It is not the job of sites, platforms, or projects to substitute for families, the NHS, educators, employers, local communities, mental health services, and the justice system, as the first and only line of defence against self-harm and suicide and their consequences.
And it is not the job of sites, platforms, or projects to be held at fault when all of those safeguards have failed.
Particularly if the party in power – the one behind the Online Safety Bill – has spent eleven years stripping all of those safeguards to the bone.
So the success of the Online Safety Bill will depend on whether government intends to approach these issues in good faith, or whether what it’s really doing is laying down as many trip hazards as possible, on purpose, for any site or service which holds any content on anything, whether that’s good or $bad.
This may seem like a lot to make out of a single sentence in an hour-long debate, but in my view, it reflects the heart of the problem with this Bill. People want easy answers to tough problems.
They’re not looking in the right place.
Thanks for that write-up. The problem you are talking about seems to me to be a cousin of the censorship of historical topics like slavery, under the guise of “protecting the children”, that is happening here in the United States.
The advocates for whitewashing events, historical and in near real time, are transparently acting in bad faith.