Welcome to Algocracy: The Power of Algorithmic Bias
Search for a job offer. File a complaint with customer service. Find out about a loan or another banking product. Book a flight or a hotel room. The news, information, and advertisements you see when you open your social media… Algorithms are behind countless daily tasks, and in more sectors than we imagine, giving rise to a phenomenon known as algorithmic bias.
They’re the silent mechanisms that, more and more, move the world without us even noticing. Most striking of all, they learn to be increasingly effective from the data we provide them. They’re constantly adapting to human beings (those coded as “users”), attempting to offer an ever more personalized, fast, and satisfying experience so that our effort is minimal.
Their tentacles reach dating applications, where they can even mediate our choice of a partner by presenting us with a series of very specific candidates supposedly based on our preferences. Their foray into the digital world is such that algorithmic bias is even believed to influence our political decisions…
Algorithms seek to make our lives easier, but in reality, they’re deciding for us.
What is algorithmic bias?
Algorithms can make our lives easier. If you’re a nature lover and environmental advocate, for example, you’ll very likely find more and more information related to this topic on your social media. This isn’t necessarily a bad thing, of course. However, things change when your concerns aren’t exactly healthy.
Let’s not forget the tragic case of Molly Russell. This teenager searched for topics associated with suicide, and eventually everything her social networks showed her was content related to that theme. Almost without realizing it, we can become captive in an information bubble that other trends and more varied content can no longer break into.
Algorithmic bias refers to our false sense of control over the information we receive and the decisions we make, when in fact a hidden layer of code seeps into our daily lives, persistently offering us data that’s almost never impartial. And let’s remember: there are almost always hidden interests behind it.
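The feedback loop behind that information bubble can be caricatured in a few lines of code. The sketch below is purely illustrative (the topics, starting values, and click-through rates are all invented, and no real platform works this simply): a feed shows topics in proportion to past engagement, so a topic the user clicks slightly more often gradually crowds out everything else.

```python
def simulate_feed(counts, click_prob, rounds):
    """Deterministic caricature of an engagement-driven feed.

    Each round, every topic's expected engagement grows in proportion
    to its current share of the feed times its click-through rate.
    """
    counts = dict(counts)  # don't mutate the caller's dict
    for _ in range(rounds):
        total = sum(counts.values())
        for topic in counts:
            counts[topic] += (counts[topic] / total) * click_prob[topic]
    return counts

# Three topics start with identical engagement; "nature" gets clicked
# slightly more often than the other two.
start = {"nature": 1.0, "sports": 1.0, "news": 1.0}
rates = {"nature": 0.9, "sports": 0.5, "news": 0.5}

final = simulate_feed(start, rates, rounds=200)
share = final["nature"] / sum(final.values())
print(f"'nature' share of the feed after 200 rounds: {share:.0%}")
```

A small difference in click-through rate compounds round after round: the favored topic ends up dominating the feed, even though the underlying rule (“show people more of what they engage with”) sounds perfectly neutral.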
You don’t decide; the algorithm decides for you
There’s a phenomenon we’re seeing more and more frequently as artificial intelligence becomes normalized: AI gives us a false sense of control and self-efficacy. That feeling will only grow as using ChatGPT to help with our theses, academic papers, and endless daily tasks becomes routine.
We’ll feel more efficient, but in reality, it’ll be the chatbot carrying out the tasks that fall to us. This isn’t always negative, but it does reinforce the aforementioned algorithmic bias: the perception that we decide and act without any interference, when that isn’t the case.
Algorithms aren’t fair
Cathy O’Neil is a mathematician who wrote a very popular book entitled Weapons of Math Destruction (2016). In it, she described algorithms as “weapons of math destruction.” For starters, these computational tools aren’t exempt from moral and cultural biases, not to mention the interests behind them.
In the book, she discusses the case of a teacher who was fired because of a negative evaluation carried out by an algorithm, one that analyzed data ranging from personal messages to medical reports. The same happens when algorithms assess who gets a mortgage or public aid: certain ethnic groups, for example, will always be at a disadvantage.
However, most companies and organizations validate these quick analyses. Algorithmic bias leads them to conclude that whatever an algorithm produces must be valid, even if it isn’t fair, and often this data isn’t even verified by a person.
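O’Neil’s point about “neutral” models penalizing certain groups often comes down to proxy variables. The toy score below is entirely invented (the weights and features are hypothetical, not any real lender’s model), but it illustrates the mechanism: a formula that never mentions ethnicity or any protected category can still rank two financially identical applicants differently, simply because it takes where they live as an input.

```python
def loan_score(income, debt_ratio, neighborhood_avg_income):
    """Hypothetical loan score with invented weights.

    The third feature is a proxy: it reflects where the applicant
    lives, not how reliably they repay their debts.
    """
    return (0.5 * income / 1000
            - 30 * debt_ratio
            + 0.3 * neighborhood_avg_income / 1000)

# Two applicants with identical personal finances...
a = loan_score(income=40_000, debt_ratio=0.2, neighborhood_avg_income=80_000)
b = loan_score(income=40_000, debt_ratio=0.2, neighborhood_avg_income=30_000)
print(f"applicant A: {a:.1f}, applicant B: {b:.1f}")
# ...receive different scores purely because of their postcode.
```

If neighborhood income correlates with ethnicity, as it often does, the model reproduces that disadvantage while appearing impartial, and a human reviewer who only sees the final number has no way to notice.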
Technology with chatbots and algorithms is here to stay and will influence a large part of our tasks and decisions.
Algocracy: Algorithms at the service of politics
We often say that politicians are far removed from the real problems of the people. We question their ideas because they don’t meet the needs of their fellow citizens. Other criticisms include excessive spending on advisors, mismanagement, and mistakes in decision-making and legislation.
Recently, a report published by the consultancy Deloitte described something very striking: a possible future in which algorithms and artificial intelligence take over a good part of politicians’ tasks. They would simply analyze the data that big technology companies collect about us through our mobile phones; this way, they’d know our needs and could offer more appropriate social responses.
Likewise, artificial intelligence could be trained to keep political management free of fraud. Its analytical capacity would replace a multitude of advisers and save public bodies an enormous amount of work. Algocracy, understood as algorithms taking over the work of politicians, may sound dystopian, but it’s a real possibility.
A study conducted at Utrecht University suggested that letting algorithms take over the bureaucratic side of government organizations could be beneficial. The reason? Citizens tend to place greater trust in management carried out by a machine than in that carried out by a politician (another obvious bias).
Algorithmic bias is here to stay, and it’s only getting stronger. We’ll continue to think that many of the purchases we make, the people we pay attention to on social networks, or the ideas that we hold as true are the product of our will. We’ll continue to perceive ourselves as having free minds, when, in reality, we’ll silently become more and more conditioned.
We see this especially in young people who are increasingly unhappy because they live in a digital universe based on social comparison. We must understand that algorithms aren’t entities that arise on their own. Rather, there are large companies behind them that program them. And such programming always has a goal.
If we’re headed for a future in which people and artificial intelligence work together, we need those who train and program AI to be transparent and to start from more ethical, fair, and healthy values. We must regulate these mechanisms, which are increasingly shaping the behavior of users. That is, each and every one of us.
- Lorenz, L., Meijer, A., & Schuppan, T. (2020). The algocracy as a new ideal type for government organizations: Predictive policing in Berlin as an empirical case. Information Polity, 26, 1-16. https://doi.org/10.3233/IP-200279
- Deloitte report: How artificial intelligence could transform government: https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/artificial-intelligence-government-summary.html