4.4 - Mitigation techniques

While this paper doesn’t presume to have answers to all of the questions raised throughout, we will try to collect a list of possible mitigation techniques that can help minimize the influence of algorithmic gatekeeping on public perception. It is important to mention, however, that we don’t believe there is a way to avoid algorithmic gatekeepers altogether, either by returning to the old system or by proposing a new paradigm. Algorithms, for better or for worse, are here to stay, because, within our current knowledge, there is nothing comparable that can take their place. Lastly, before listing mitigation techniques, it is important to note that there is no guarantee that these techniques will improve the quality of results in any meaningful way; they will, however, help individuals think critically about algorithmic results and modify how those results are tailored to them.

Awareness

The first step to minimizing the effects of algorithmic content filtering is realizing that it exists, and understanding the issues and challenges that it presents. It is likewise important to become familiar with the differences between human and algorithmic gatekeeping, and to avoid transferring old models of acceptance and parsing to algorithm-based content delivery. In fact, as seen before, there is no indication that algorithmic gatekeepers follow any of the ethical codes we commonly expect journalists to follow, and that is something to keep in mind when parsing the information they return.

Expansion of information sources

There are ways in which algorithm-driven content filtering can be forced to adapt, or expand its offer. One important aspect of how algorithms work is custom-tailoring content to the individual, based on their browsing habits. In fact, most websites (and most certainly all commonly used information filtering platforms like Facebook, Twitter and Google) track a large array of interaction habits through a number of different tools. Cookies, for example, allow a website to recognize a returning visitor and tie together the actions taken on it (clicks, scrolls, even time spent on certain parts of the website versus others). Social media integration buttons (the Facebook “Like” or the Twitter “Tweet” button), when added to a website, automatically start tracking user behavior and, if the same user is already logged in to that service in the same browser, combine browsing habits inside and outside the platform into large, individual datasets of browsing habits. Algorithms then use that information to decide which content to prioritize when filtering. By using a number of anonymization browsing techniques9, we are able to force algorithms to produce results that haven’t been customized to us and that might therefore differ from the ones we would usually receive.
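
As a rough illustration of how much a stored identity can shape what a platform returns, the following sketch (in Python) issues the same request twice: once from a fresh, cookie-free session, comparable to the first visit in a private-browsing window, and once carrying a previously stored cookie that a server could link to an existing profile. The URL, query and cookie value are made-up placeholders, and real platforms personalize on many more signals (login state, IP address, device fingerprint) than this sketch shows.

```python
# Illustrative sketch only: compares a "clean" request (no cookies) with a
# request that reuses a stored cookie. The endpoint and cookie are hypothetical.
import requests

SEARCH_URL = "https://www.example-search.com/results"  # placeholder endpoint
QUERY = {"q": "climate policy"}

# 1) "Anonymous" request: a fresh session with no cookies or prior history,
#    roughly what a private-browsing window sends on its first visit.
with requests.Session() as clean_session:
    clean_page = clean_session.get(SEARCH_URL, params=QUERY, timeout=10)

# 2) "Tracked" request: the same query, but carrying a cookie saved from
#    earlier browsing, so the server can link it to an existing profile.
with requests.Session() as tracked_session:
    tracked_page = tracked_session.get(
        SEARCH_URL,
        params=QUERY,
        cookies={"session_id": "previously-stored-value"},  # hypothetical value
        timeout=10,
    )

# Comparing the two responses (here crudely, by length) hints at how much the
# returned content depends on the identity attached to the request.
print(len(clean_page.text), len(tracked_page.text))
```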

Manual influence over algorithms

Algorithms, by collecting information about browsing habits, build an online representation of each individual. This digital persona is then used by algorithms when deciding which information to prioritize. It is possible to approach this machine learning process from an obfuscation and subversion perspective: individuals can purposefully click on and open pages they wouldn’t normally visit, in order to “throw a wrench” into the system and force algorithmic filtering to provide different sets of information.
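
The sketch below illustrates this obfuscation idea in its simplest form, similar in spirit to browser extensions such as TrackMeNot or AdNauseam: it periodically visits pages drawn from a list of sources the person would not normally read, adding noise to the behavioral record. The list of URLs and the function name are hypothetical placeholders, and in practice the visits would have to happen in the person’s actual browser for the cookies and tracking scripts described above to register them.

```python
# Illustrative sketch only: adds "noise" visits to a browsing profile.
# The URLs below are placeholders, not real outlets.
import random
import time

import requests

# Hypothetical set of pages the person would not normally read,
# chosen to span different topics and viewpoints.
NOISE_SOURCES = [
    "https://example-outlet-a.com/politics",
    "https://example-outlet-b.com/science",
    "https://example-outlet-c.com/sports",
]

def add_noise(visits: int = 5) -> None:
    """Fetch a few randomly chosen pages with irregular pauses in between."""
    for _ in range(visits):
        url = random.choice(NOISE_SOURCES)
        try:
            response = requests.get(url, timeout=10)
            print(f"visited {url} ({response.status_code})")
        except requests.RequestException as error:
            print(f"skipped {url}: {error}")
        # Irregular delays make the pattern look less like an automated script.
        time.sleep(random.uniform(5, 30))

if __name__ == "__main__":
    add_noise()
```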

Codifying checks and balances

More than a technique, this method of mitigation is an idea and a hope for future development and critical thinking around algorithms. It is possible that, in the future, algorithm development could be regulated through policy, though it is beyond the scope of this paper to propose systems for reaching that point. However, one clear starting point could be advocating for access, by government-appointed institutions or academics, to “black box” algorithms, so that they can study them and present independent audits of the inner workings of algorithmic processes. This would avoid over-regulation while providing the public with a more complete understanding of how choices are being made behind the algorithmic gate.

9. Most browsers offer a “private browsing” functionality that starts a session without any logins, cookies or other previous information. Additionally, anonymization services like VPNs and the Tor network can hide the user’s identity online. For more information about anonymization tools, visit Security in a Box by Frontline Defenders and Tactical Technology Collective [https://securityinabox.org/en] or the Electronic Frontier Foundation’s Surveillance Self-Defense project [https://ssd.eff.org/].
