Across the world, governments are starting to wake up to the problems caused by unchecked “free speech” online. They are realising not only that they must act, but also that action is possible and will be popular.
In the US the new administration is showing signs that it will grasp the challenge posed by the spread of hate speech and disinformation. The EU has unveiled its own proposals in the form of the Digital Services Act, and the UK is one of many European nations to have proposed domestic legislation, setting out its thinking in a White Paper and Fact Sheet on “Online Harms”. And of course the platforms are belatedly trying to get ahead of the debate by taking limited action themselves.
Nearly two years ago Robert McLeod and I founded “Clean Up The Internet”. We were prompted by a growing concern that the level of discourse online had become seriously degraded. Twitter, which I had originally joined for the puns and memes, was an increasingly toxic environment, polluted by lies, distortions and abuse. And each time I was drawn into one of those discussion threads, I was struck by how hard it was to know whether my interlocutor was a real person, or at least whether they were who they claimed to be. Was I really interacting with an ex-nurse and mum of four from Leeds, who seemed to have unlimited time to express very strong views on a range of culture war topics?
Do we really have a problem?
Our first action was to commission polling from YouGov, to identify whether our diagnosis of the problem was more widely shared and whether there would be public support for government intervention to fix it.
There are so many things about the online landscape that need fixing. Recently we have seen an alarming rise in racist abuse aimed at footballers, or at least we have become far more conscious of it. A number of former politicians have come forward to say they were practically driven off social media by the volume of hateful and threatening posts they received. Conspiracy theories around 5G, Covid and vaccine safety cause anxiety and undermine public health messaging, sometimes with fatal results.
What was apparent was that anonymous, pseudonymous or blatantly fake accounts were making a major contribution to these various online harms (yes, this word now has a plural). Inevitably, a close look at the list of “followers” of some of the more provocative individuals online would reveal a large number of accounts that did not look authentic. But the apparently high follower count would lend an aura of credibility, and those follower accounts would then amplify the messaging, helping it to spread far wider and faster. Aside from generating and spreading their own dubious content, they would also pile into other discussions, derailing attempts at reasoned debate or swamping them in a tide of irrelevance, vitriol and disinformation. We commissioned research which indeed showed that anonymous accounts were disproportionately responsible for spreading harmful Covid disinformation.
But how to deal with it?
We took as our starting point that there are many occasions when a citizen might want to withhold their identity. That could range from dissidents who feared retaliation from their governments, to whistle-blowers who would face disciplinary action from their employers, to individuals who simply wanted to express thoughts and concerns without necessarily experiencing a backlash. All of these could be both legitimate and important to a free society. On the other hand, it was not clear to us that an individual who chose to set up an anonymous account should be free to use it to disrupt the conversations of others, or to send them unsolicited abuse. One person’s free speech should not always trump someone else’s.
The solution we arrived at, and have put forward in our submission to the government’s consultation on its proposed Online Harms bill, is a simple one. We believe it both strikes a proportionate balance and would be relatively easy to implement.
Firstly, we would want every social media user to be given the option of being verified. To take Twitter as an example, that would mean every user (not just politicians, celebrities and so on) would be able to have a “blue tick” confirming that they are indeed who they claim to be.
Secondly, each user’s verification status would be visible to everyone else. A user who saw that another account had chosen not to be verified could then exercise their own judgment about what that signalled: how likely it was that the unverified user was who they said they were, and whether their information could be trusted.
And finally, every verified user would be able, with a single click, to choose not to see posts from, or be contacted by, any user who was not verified. No longer would they first have to be subjected to unwanted or abusive messages and then have to block or mute the offending accounts one by one.
In essence we are taking a design approach to the problem
We recognise that much abusive, misogynistic or racist language, as well as much disinformation, is promulgated by well-known figures, and that other steps will need to be taken to deal with that. But it is hard to proscribe content, and hard to determine when something that is not illegal is nonetheless harmful; that often depends on intent and context. Our proposals would at a stroke reduce the speed at which such material can spread, and would give a degree of power back to individual users.
With respect to verification itself, we have on our website a document that discusses some possible approaches. We prefer to focus on outcomes rather than mandating particular solutions, as there are many workable ways of handling verification. The key principle we insist on is that the data provided should be held securely and should not be used by the platforms for any other purpose, such as harvesting more information on their users for ad targeting and other commercial aims.
Ideally the platforms would recognise their responsibility and take steps voluntarily to introduce reforms along the lines we are proposing. It is more likely, however, that they will only do so when faced with the inevitability of regulation. We therefore also have some thoughts on how to make these proposals effective. One would be that where a platform chose not to adopt such a system, but continued to leave users exposed to posts from unverified accounts, the platform should be held responsible for that content. In other words, there would be a limited exception to the principle that platforms are treated as mere conduits rather than as publishers. In addition, the platforms could be required to nominate a senior executive, located within the jurisdiction, who would take personal responsibility for ensuring the rules are implemented effectively.
Regulation could incentivise platforms to adopt our solution
One could appeal to the public-spiritedness of the platforms, but that is probably unrealistic. Once advertisers see that most of the accounts they want to reach are verified, however, it is very possible they will stipulate that they will not pay for ads displayed alongside, or promoted to, unverified accounts. That in turn might well incentivise the platforms to clean out the multitude of manifestly fake accounts that they would no longer be able to “monetise”.
We can anticipate a range of objections to these proposals. Some will protest that it is all too complicated, that it will be technically challenging, or even that it will somehow impinge on fundamental human rights such as free speech. In the past such arguments have succeeded in delaying and frustrating action.
But it is clear that the public mood has shifted and that there is widespread demand for change. Moreover, the platforms are taking a major risk: if they do not embrace that demand, they will find themselves on the receiving end of far more draconian action.
Stephen Kinsella OBE is founder of Clean Up the Internet and lives in Stroud.
Ed: Don’t let this article stop you sharing our content on social media, but please do it responsibly!