Chapter 2 - The decontextualization of algorithmic systems

Filtering and categorization of information is inherently dependent on societal and cultural contexts. Human beings, when faced with the task of prioritizing information, can implicitly and explicitly draw on a personal and societal wealth of domain expertise, as well as psychological, ethical, cultural, and political experience and knowledge. Formative studies in non-STEM subjects such as journalism, communication theory, and political science give great weight to learning about, debating, and analysing complex and nuanced topics and their outcomes. In the course of their work, journalists and other gatekeepers of information base their selections on this kind of personal knowledge, as well as on sociopolitical affiliations and economic allegiances, in addition to quantitative and qualitative analysis of public interest. This creation of implicit and explicit context is the cornerstone of information prioritization by human gatekeepers.

Given the increasing role of algorithms in mediating everyday life [...], it is vital that we develop more critical ways of thinking about them that does not keep particular viewpoints ‘strictly out of frame’ and which situate them within their wider socio-technical assemblages. This requires technical approaches to be complemented by perspectives that consider: the discursive logic driving the propensity to translate practices and systems into computation; how the practices of coding algorithms are thoroughly social, cultural, political and economic in nature; and how algorithms perform diverse tasks, much of which raises political, economic and ethical concerns. (Kitchin, 2014, p. 7)

There is also a deeper, neurological level of explanation of how human brains make decisions: this is beyond the scope of this paper and outside the writer’s expertise, so in-depth analyses of the exact process of information selection and decision-making will not be addressed. However, it is useful for the purposes of this argument to note the distinction between the neurological process of decision-making and the plethora of contextual information that feeds into which decisions will be made. We can posit a distinction between the functional, biological aspect of decision-making (which neurons fire, in response to which stimuli, and how strongly different brain regions are involved when a given choice is made) and the substantive, contextual background that drives and informs this process. For humans, the functional and substantive processes are intrinsically enmeshed, in the sense that we have no conscious way of separating them when making decisions. I argue that in algorithmic decision-making the focus is firmly and squarely on the functional process, while we are far behind in understanding and creating a substantive process for algorithmic decision-making. This is not a coincidence, either: algorithmic processes lack context by design.

The notion that nearly everything we do can be broken down into and processed through algorithms is inherently highly reductionist. It assumes that complex, often fuzzy, relational, and contextual social and economic interactions and modes of being can be logically disassembled, modelled and translated, whilst only losing a minimum amount of tacit knowledge and situational contingencies. (Kitchin, 2014, p. 8)

It is long-standing practice to decouple the study of algorithms from qualitative context: as a rule, algorithms are seen and taught as purely logical problems. The study of computer-aided methods of problem solving stems organically from (and is intrinsically intertwined with) mathematics; as such, the study of algorithms is strongly focused on efficiency. This means any given problem is reduced to the fastest, most efficient computation, and computer science curricula actively enforce this paradigm. This is, of course, nothing new in academia: isolating a problem and reducing externalities to a minimal set of controllable variables is a fundamental aspect of the scientific method. The main issue at hand is that the overwhelming need for algorithms to filter the information avalanche means that this learning process is translated directly from classrooms to real-life scenarios, without first passing through a humanization process that restores an understanding of the importance of contextual externalities.
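To make the paradigm concrete, consider a minimal sketch of the kind of “textbook” selection routine it produces (the function, field names, and example data below are hypothetical illustrations, not taken from any real system). The question of what readers should see first is reduced to an efficient computation over a single quantitative proxy; the contextual considerations discussed above simply have no place in the code.

```python
import heapq


def top_stories(stories, k=10):
    """Return the k stories with the highest engagement score.

    A deliberately 'textbook' routine: it runs in O(n log k) time and
    optimizes a single numeric proxy (clicks). Nothing in its inputs or
    logic represents the social, political, or ethical context that a
    human gatekeeper would weigh when prioritizing the same stories.
    """
    return heapq.nlargest(k, stories, key=lambda story: story["clicks"])


# Hypothetical usage: only the quantifiable signal survives the reduction.
feed = [
    {"title": "Local council budget vote", "clicks": 1200},
    {"title": "Celebrity gossip roundup", "clicks": 48000},
    {"title": "Long-form investigation", "clicks": 900},
]
print([story["title"] for story in top_stories(feed, k=2)])
# -> ['Celebrity gossip roundup', 'Local council budget vote']
```

From the curriculum’s point of view this is a complete and correct solution; from the perspective of this chapter, everything that mattered to the human gatekeeper has been optimized away before the first line was written.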

An example of this inhuman, decontextualized logic wreaking havoc in the real world is the subprime financial crisis of the 2000s. The crisis was rooted in over-confidence in the short-term gains of selling variable-rate mortgages to people with poor credit ratings in the United States, who then defaulted on their mortgages in overwhelming numbers once rising variable rates considerably increased their monthly payments. However, one of the underlying reasons this very obvious problem was left to grow until it endangered financial markets across the world and spurred the most severe global recession since the Great Depression is that the real-life context was far removed from the actors in the financial markets by an extraordinarily complex system of algorithmically driven creation and packaging of financial assets. In other words, the financial assets being bought and sold (derivatives, futures, collateralized debt obligations) were created through complex financial mathematics, with the primary goal of maximum efficiency, which sterilized those products of their messy, complex real-life meaning in order to make them easier to sell. As a consequence, these financial products became so completely integrated into financial markets that, once they inevitably crumbled, the results were catastrophic and global.

This humanization process might be improved by a stronger disciplinary overlap between computer science and the humanities. For example, a number of universities have started building curricula5 that analyse how the creation of algorithmic functions and data analysis fits within other aspects of social, political and economic life. Currently, however, the programmers in charge of creating computer-aided (or computer-only) filtering mechanisms for information have been trained over the last 20 years within the efficiency-first “algorithm as logical problem” paradigm, and the fairly recent interest in the interdisciplinarity of algorithms and data still needs to seep into the collective consciousness. This means that the people in charge of writing the scripts that decide which information is more or less important make their selections with a mindset that prizes efficiency over context, and subsequently impose that mindset on the information selection process.

5. Such as the IT University of Copenhagen’s PhD course in Big Data and Ethics [http://en.itu.dk/Research/PhD-Programme/PhD-Courses/PhD-courses-2016/PhD-Course---Big-Data-and-Ethics?CookieConsent=yes], and the edX course in Data Science Ethics [https://www.edx.org/course/data-science-ethics-michiganx-ds101x]
