3.3 - Restriction of content

Thus far, we have dedicated the chapter on algorithmic challenges to processes that result in different ways of filtering content automatically. It is important to mention another way content is currently being filtered explicitly: the active removal of content by algorithm owners, based on the owners' own perceptions of morality as well as on their assessment of how much offensive content the public will accept. Again, it is important to stress that this paper does not aim to make a qualitative judgment on censorship⁷. Rather, it aims to analyse the drivers behind choices to restrict specific types of content, as a defining aspect of how algorithmic gatekeeping changes the information the public is exposed to. Content restriction in itself is not inherently a negative mechanism: at the very least, content providers have incentives to automatically remove content that might prove offensive to their user base, as well as legal compliance obligations over which algorithm owners have no power. However, in the current ecosystem the process of restricting the outliers of outrage carries an exclusivist, conformist connotation.

As we saw in previous chapters, the predominance of certain types of information is shaped by the implicit and explicit biases of algorithm owners and developers, as well as by the interests of the societal groups that are most remunerative and thus offer the highest return on investment. This means that these groups' agendas influence what is considered "safe" and what needs to be removed from a platform, which in turn imposes those values onto other groups that may or may not share them. For example, on Facebook and Instagram, two of the most widely used social media platforms, a realistic photograph of the female nipple is almost exclusively considered "unsafe" (with minor exceptions, such as a photograph of a woman breastfeeding her child). At the same time, photographs and videos of weapons and violence, as well as instances of hate speech, verbal abuse of minorities or any other form of discrimination, go through a completely opaque "vetting" process in which Facebook and Instagram decide whether content needs to be removed according to rules no one outside the corporations is privy to.

An interesting case study of cultural influence on content restriction is a video documentary by artist, activist and videographer Moritz Riesewieck. In the documentary, "The Outsourced: who cares for our digital waste?", Riesewieck travels to the Philippines to collect ethnographic accounts of the workers in charge of "content moderation" for big corporations. In this disturbingly poignant story, Riesewieck makes the case that the Philippines, one of the largest centres for content moderation in the world, was not chosen for its low salaries or skilled workforce (other countries, especially in South-East Asia, offer similar or lower average salaries with a similarly skilled workforce). Rather, the Philippines was selected because it is a profoundly Christian country and, as such, shares a belief system similar to that of the algorithm owners. (re:publica, 2016)

The lack of control or oversight over how content restriction is implemented is particularly important if we see online social media platforms as the equivalent of public squares: for better or for worse, social media platforms have functionally become the digital equivalent of a public space for exchanging voices, opinions, art, activism and dialogue. However, these places are not public at all: they are privately owned and privately controlled, and as such they follow the rules set by algorithm owners. There is an almost cognitive dissonance here, if we consider that Facebook, Twitter and other social media platforms have no widely accepted public alternatives, so individuals cannot easily choose not to use them; doing so would effectively cut them out of the societal loop. The public is therefore locked into a proprietary model that imposes a form of content moderation individuals cannot influence.

Lastly, while content moderation is still predominantly carried out by human intervention, there are indications that machine learning is starting to surpass it. In May 2016, TechCrunch published an article stating that Facebook now receives more reports of offensive photos from its algorithms than from human beings (Constine, 2016). If we factor in the cost-effectiveness of a machine-only process compared with human intervention, it is safe to assume that this trend will continue to its logical conclusion: content restriction implemented solely by algorithms.
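
To make that mechanism concrete, the sketch below illustrates, in purely hypothetical terms, how an algorithm-driven moderation pipeline might be structured: a classifier assigns a score to each piece of content, and owner-defined thresholds determine whether the content is removed outright, queued for human review, or left untouched. The categories, threshold values and classifier here are invented for illustration; this is not Facebook's or Instagram's actual system, which remains opaque. It is meant only to show where the owners' choices (which categories count as "unsafe", and how the thresholds are set) enter the process.

```python
# Hypothetical sketch of an algorithm-driven moderation pipeline.
# The categories, thresholds and classifier are invented for illustration;
# real platforms do not disclose how these decisions are made.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModerationPolicy:
    # Thresholds are set by the platform owner, not by users:
    # this is precisely where the owners' values enter the process.
    remove_threshold: float = 0.9   # above this, content is removed automatically
    review_threshold: float = 0.6   # above this, content is queued for human review


def moderate(content: str,
             classifier: Callable[[str], Dict[str, float]],
             policy: ModerationPolicy) -> str:
    """Return 'remove', 'human_review' or 'allow' for a piece of content."""
    scores = classifier(content)           # e.g. {"nudity": 0.95, "violence": 0.1}
    worst = max(scores.values(), default=0.0)
    if worst >= policy.remove_threshold:
        return "remove"                    # no human ever sees this decision
    if worst >= policy.review_threshold:
        return "human_review"              # outsourced moderators decide
    return "allow"


# Example usage with a stand-in classifier:
if __name__ == "__main__":
    fake_classifier = lambda text: {"nudity": 0.95 if "nipple" in text else 0.0}
    decision = moderate("photo: breastfeeding, nipple visible",
                        fake_classifier, ModerationPolicy())
    print(decision)  # -> "remove"
```

Even in this toy version, the decisive parameters are opaque to the people whose content is being judged, which is the crux of the argument above.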

7. For an overview of how online censorship affects freedom of speech, the reader is invited to consult the resources at http://onlinecensorship.org, in particular the 2016 report on the state of online censorship (Anderson, Stender, Myers West & York, 2016).
