This blog entry has previously been published at:  

For a while now it has been quite a conundrum: how might I broach the subject of algovernmentality without becoming overly arcane and artificial?

Finally, Facebook has deigned to step in and solve the problem for me by providing a wonderful example:

In their eternal quest for a better understanding of their users, Facebook – or, to be exact, Kramer et al. – conducted a tiny experiment: quite innocently, they argued that subtle shifts in the emotions expressed in an individual’s social network can change the mood of said individual. To test this hypothesis, the researchers modified the newsfeed algorithm for 310,000 users. Since everybody is so very social and has hundreds of intimate friends, people would quickly be overwhelmed if Facebook displayed every single post, such as what they just devoured for dinner. Therefore, Facebook quietly filters these posts and only displays a subset it deems interesting to any given user. The newsfeed algorithm has the task of deciding what to display and what to hide in every newsfeed – it tries to model what is interesting to the user. For the experiment, this algorithm was modified to hide a portion of posts containing either ‘happy’ or ‘sad’ words. There was also a control group, for which random posts were hidden. The researchers found that users were indeed influenced by the overall emotions posted in their feed: those with happier newsfeeds were more likely to broadcast happy messages to the world, and vice versa.
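The filtering mechanism described above can be sketched in a few lines. This is only a toy illustration, not Facebook’s actual code: the word lists here are tiny stand-ins for the LIWC dictionaries the study used, and the suppression rate is an assumed parameter.

```python
# Toy sketch of emotion-based feed filtering: score each post by the
# emotion words it contains, then withhold a fraction of the posts
# carrying the targeted polarity. Word lists and rate are hypothetical.
import random

POSITIVE = {"happy", "great", "love", "wonderful"}  # toy stand-in word lists
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def polarity(post):
    """Classify a post as 'positive', 'negative', or 'neutral' by its words."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress, rate=0.5, rng=random.random):
    """Withhold each post of the targeted polarity with probability `rate`."""
    return [p for p in posts if polarity(p) != suppress or rng() >= rate]
```

With `suppress="negative"` a user’s feed skews happier; the control condition corresponds to suppressing posts regardless of their polarity.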

In reaction to this publication, there was an outcry in the academic community, which criticised the ethics of conducting such an experiment on involuntary human participants. Unfortunately, the reality is that, experiments aside, Facebook’s algorithm already curates the content shared by its users. And this is just one company amongst many that employ algorithms: another example is Google, which scans every email sent through its services in order to provide personalised advertisements. If the algorithms decide that the content of a message may be objectionable, the email is flagged and sent to a real human who reviews it. This led to the arrest of one John Skillern on suspicion of child pornography (cf. Kravets). While Google’s actions are commendable in this case, how many mails are read by unrelated third parties because of such a flag, and how thin is the line between finding criminals and undermining free thought? Would it still be ethical if Google reported dissidents to a regime?

Wherever your opinion on this matter falls, one fact remains: humankind is increasingly being governed by algorithms – hence the term “algovernmentality”. In contrast to our fuzzy ways of reasoning, algorithms do not err: they simply calculate properties and probabilities to generate hypotheses – often binary outcomes. No matter how intelligent the machine, even if it ‘understands’ natural language like IBM’s Watson (the computer that won the game Jeopardy against human players), computers are currently only at a level where they can guess at numerical outcomes. In a world that is increasingly shaped by the requirements of algorithms – for instance when a corridor for a high-speed network is cut straight across America from the East Coast to the West Coast – the world is also increasingly dominated by algorithmic thinking. In effect, we begin to rely more on arbitrary numbers than on human logic. Decisions are made based on statistics, not on experience. If these decisions are also made without human oversight, algorithmic logic can lead to astonishing situations.

For instance, the price of one book on Amazon suddenly rose by several million US dollars. Or, perhaps more alarmingly, 50–70% of the trading volume on the US equity market is actually set in motion by algorithms, which can exhibit catastrophic glitches, such as the “Flash Crash” of May 6, 2010 (Sornette & Von der Becke 2011). However, quasi-intelligent algorithms also have their worth: they can not only solve problems that would be too time-consuming for humans, they can also become quite accurate in their decisions. One example is an algorithm that correctly predicts the votes of the individual judges of the US Supreme Court with an accuracy of 70.9% over more than 7,700 cases, using only data available prior to each decision (Katz et al. 2014). In comparison, humans get it right around 75% of the time, but coming almost as close as a group of human experts is already amazing for a machine. It is also telling how this algorithm comes by its intelligence: the program randomly assigns weights to different variables, such as the age or gender of the respective judge, and then decides which model fits the outcomes best. The algorithm thus ‘learns’ to predict future outcomes by looking at past decisions and guessing at the factors influencing each judge’s decision.
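The learning procedure described above – draw random weights, keep whichever model best explains past decisions – can be sketched as follows. This is only a toy illustration under assumed features and labels; the actual Katz et al. model is built from randomized decision trees, not a linear weight search.

```python
# Toy 'random weights' learner: sample many candidate weight vectors,
# score each on historical decisions, keep the best-fitting model.
# Feature encoding and vote labels are hypothetical.
import random

def predict(weights, case):
    """Weighted sum of case features; 1 = one vote, 0 = the other."""
    score = sum(w * x for w, x in zip(weights, case))
    return 1 if score > 0 else 0

def fit(cases, outcomes, n_candidates=1000, rng=None):
    """Random search over weight vectors, keeping the most accurate one."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    best, best_acc = None, -1.0
    for _ in range(n_candidates):
        weights = [rng.uniform(-1, 1) for _ in cases[0]]
        acc = sum(predict(w_c := weights, c) == y
                  for c, y in zip(cases, outcomes)) / len(cases)
        if acc > best_acc:
            best, best_acc = weights, acc
    return best, best_acc
```

The in-sample accuracy of the winning weights plays the role of “which model fits the outcomes best”; predicting genuinely future cases then amounts to applying those frozen weights to new feature vectors.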

What should we take away from the increasing presence of algorithms in our society? Algorithms are awesome, certainly. But as with any form of governance, algovernmentality also needs oversight – I have already mentioned several examples of situations where algorithms did not work as intended. And since algorithms are usually created to solve one particular problem, they might not be suitable if the problem changes unexpectedly.

Fortunately, there are intellectuals like Evgeny Morozov, and rebels like Edward Snowden, who publicise the ‘dark side’ of current computer and Internet technology. Still, it should be the task of each and every one of us to think hard about our collective usage of and dependency on technology, and about how we as individuals should act when faced with this new form of governmentality.


Algorithm is just a fancy way of saying ‘computer program’.

Sources & Further Reading

Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 201320040.

Gallagher, S. (2014). Air force research: How to use social media to control people like drones. Ars Technica.

Slavin, K. (2011). How Algorithms Shape Our World. TED Talk.

Sornette, D., & Von der Becke, S. (2011). Crashes and high frequency trading. Swiss Finance Institute Research Paper, (11-63).

Katz, D. M., Bommarito, M. J., & Blackman, J. (2014). Predicting the Behavior of the Supreme Court of the United States: A General Approach. Available at SSRN 2463244.

Morozov, E. (2013). To save everything, click here: Technology, solutionism, and the urge to fix problems that don’t exist. Penguin UK.

But the subject is not new; it was famously thematised by Kevin Slavin in 2011, in his TED talk “How Algorithms Shape Our World”.

