De-platforming, does it work?

Imlisanen Jamir

Big tech seems to have finally “woken up” to what it has let run rampant for years: hate speech.

The recent moves by tech giants, prominent among them Google, Facebook and Twitter, to de-platform those they deem hate mongers and inciters of violence open a Pandora’s box of questions about free speech and the role of private platforms.

The law, at least as it applies to the government, is somewhat clear (as clear as it can get where something like free expression is concerned). However, in an internet age where people’s voices are largely channelled through a few corporations’ platforms, those corporations’ editorial responsibility and their actual role as media entities are questions worth pondering.

The recent de-platforming drive has prompted debate about ‘liberal big tech’ silencing free speech and taking on the role of editors, but also about whether de-platforming is effective, and for whom.

As reprehensible as the actions of people like the current US President (the most prominent target of big tech’s recent drive) and his cohorts are, the rampant use of the internet to foment violence and hate, and to reinforce bigotry, is about more than any one personality. He is certainly not the first politician to exploit the architecture of the internet in this way, and he won’t be the last.

In an ideal world, a space unencumbered by structural racism and prolific bigotry, it might make sense for online platforms to take a neutral stance on content regulation. If the past years have taught us anything, it’s that we do not live in such a world.

As private entities, these corporations are within their rights to deny service to customers who do not abide by their terms of service. But when entities such as these transcend the role of mere service providers, as they have for years now, any act of censorship raises pangs of concern.

The politics of dissent and offence shape the free speech argument, and any move towards censorship, however well intended, creates a very slippery slope for free expression.

We need solutions that don’t start after untold damage has been done. Changing these dangerous dynamics requires more than just the temporary silencing or permanent removal of bad actors from social media platforms. Additional precise and specific actions must also be taken.

For instance, platforms should reveal who is paying for advertisements, how much they are paying and who is being targeted. They should commit to meaningful transparency about their algorithms, so we know how and what content is being amplified, to whom, and with what impact.

They should turn on by default the tools that amplify factual voices over disinformation, and work with independent researchers to facilitate in-depth studies of the platforms’ impact on people and our societies, and of what can be done to improve things.

These are actions the platforms can and should commit to today. The answer is not to do away with the internet, but to build a better one that can withstand and gird against these types of challenges.

This is how we can begin to do that.

Comments can be sent to imlisanenjamir@gmail.com