
Will Big Tech decide who to censor?

13.01.2021
Raymonde Weyzen
Opinion

After Wednesday, 6 January 2021, the world witnessed how a United States president was banned from the social media platforms he made fervent use of, on account of “the risk of further incitement of violence”1 that his posts might cause following last week's riot and breach of the U.S. Capitol. The banning of Donald Trump from Twitter in particular is, however unique it may be, not the only example of Big Tech stepping in to control freedom of speech. Amazon also recently took matters into its own hands by shutting down the servers on which Parler, an alternative social networking and microblogging application, ran. The application is associated with right-wing extremism and conspiracy theories and is supposedly a hotbed of hateful comments, racism, and violence. Apple and Google had already barred Parler from their app stores over the incitement of violence the application might cause (Verhagen, 2021)2.

Though this may be true, and these companies are likely using their enormous power to control the discussion with the best intentions, it cannot be denied that Google, Apple, Facebook, Amazon, and Microsoft (GAFAM) are now deciding which information reaches us and, conversely, which information is kept from us. For social media channels, this is of course nothing new. If anything, they are entitled to control what is said on their platforms to some extent, as they operate much like media companies. If tech companies, and even a retail company, also get a say in this, however, we unlock a new level of diffusion of that power. Another example of this is the withdrawal of financial support for political campaigns by several large companies, e.g. Hallmark Cards3.

As of now, tech companies can operate in a relatively grey area, as they are classified neither strictly as technology companies nor as media companies. This allows them to operate without having to account for their actions the way media companies have had to. In the U.S. and in Europe, however, there are strong advocates for more stringent legislation4. The European Commission has already taken action in this regard by proposing the Digital Services Act, which would require platforms to reveal how they decide to remove content or even an account.

Though it seems reasonable, given the power and influence these tech companies have, for governments to impose regulations and hold them accountable, such control also sits uneasily with the notion of freedom of speech. Sharing information is a vital part of freedom of speech, and in the absence of clear rules it is hard to see where misinformation ends and incitement to violence begins. By barring opinions that do not align with the status quo or with accepted opinion, social media might distort reality more than we like to admit. Where do we draw the line between monitoring information and preventing violence?

Image credit: © 2020, TechCrunch