We already know that algorithms aren't neutral, and the evidence is there every day. But what exactly does that mean? And, after all, what is an algorithm, and how does it work on digital platforms?
Hi. My name is Laura, I have a master's degree in communication and I'm an undergraduate student in internet systems. Today I want to talk about how algorithms operate and how we can investigate them even without access to the source code.
Remember a few years ago, when many people believed that social media algorithms were impartial? We thought that reach and suggestions were the result of objective calculations that simply rewarded good content or not.
This conception of impartiality only benefited the big tech companies that own the algorithms, because it created the feeling that they were entirely based on calculations, without any human interference—and therefore, essentially fair.
Today we know that, in addition to human decisions being at the heart of these processes, platforms have already acted deliberately to amplify or obscure people and ideas. An algorithm is not "the" only way, but rather ONE OF the ways to arrive at an expected result.
On digital platforms, an algorithm is not just a list of fixed steps. It is modular, formed by different parts that fit together like pieces of a puzzle. Over time, these pieces are replaced, adapted, and integrated into new functions, so the system is CONSTANTLY evolving and changing.
There is no single 'the' algorithm of a platform. What we call 'the YouTube algorithm' or 'the Facebook algorithm' is, in fact, an ecosystem of many interdependent modules and processes: recommendation systems, spam filters, pattern detection, content prioritization.
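To make the idea of interdependent modules concrete, here is a minimal sketch of a feed built as a pipeline of interchangeable parts. Everything here is invented for illustration (the module names, the data, the "likes" signal); real platforms do not publish their internals, which is exactly the point.

```python
# Hypothetical sketch: a feed as a pipeline of swappable modules.
# All names and signals are invented for illustration.

def spam_filter(posts):
    """One module: drop posts flagged as spam."""
    return [p for p in posts if not p.get("spam", False)]

def rank_by_engagement(posts):
    """Another module: order posts by a chosen 'relevance' signal."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def build_feed(posts, modules):
    """Compose independent modules into one pipeline."""
    for module in modules:
        posts = module(posts)
    return posts

feed = build_feed(
    [{"id": 1, "likes": 5, "spam": False},
     {"id": 2, "likes": 9, "spam": True},
     {"id": 3, "likes": 7, "spam": False}],
    modules=[spam_filter, rank_by_engagement],
)
print([p["id"] for p in feed])  # → [3, 1]
```

Notice that any module can be swapped out or reordered without touching the others, which is why "the algorithm" is really many replaceable parts rather than one fixed recipe.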
Notice how many opportunities companies have to deliberate on how their system will operate. From what data will be collected to what should or should not be considered relevant in obtaining results. With this, the user has lost almost all control over what they see. Their feed is the result of thousands of automatic, invisible, and constantly changing micro-decisions.
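The "micro-decisions" described above can be pictured as weights in a scoring function: changing one number changes what surfaces at the top of the feed. This is a toy sketch; the signals, posts, and weight values are all invented for illustration.

```python
# Toy scoring function: each weight is one deliberate 'micro-decision'.
# Signals and values are invented for illustration.

def score(post, weights):
    return sum(weights[k] * post[k] for k in weights)

post_a = {"recency": 0.9, "likes": 0.2, "from_friend": 1.0}
post_b = {"recency": 0.1, "likes": 0.9, "from_friend": 0.0}

# Decision 1: prioritize friends and freshness.
w1 = {"recency": 2.0, "likes": 1.0, "from_friend": 3.0}
# Decision 2: prioritize raw engagement.
w2 = {"recency": 0.5, "likes": 5.0, "from_friend": 0.5}

print(score(post_a, w1) > score(post_b, w1))  # True: friend post wins
print(score(post_b, w2) > score(post_a, w2))  # True: popular post wins
```

The same two posts, the same user: only the company's choice of weights differs, and the feed flips. The user never sees that choice being made.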
And what can we do about it? According to the author Taina Bucher, what is at stake is not discovering the 'magic formula' of an algorithm, but developing a critical understanding of the mechanisms, steps, and operational logic that guide how the software works.
The algorithm IS NOT NEUTRAL. In addition to always demanding greater transparency, we have three complementary paths: 1) Leverage officially released information; 2) Conduct experiments and independent audits; 3) Take seriously the perceptions and theories that users themselves construct.
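Path 2, independent audits, often works by comparing what the platform serves to different test accounts. A minimal sketch of that comparison, using a simple overlap measure; the sample feeds are invented, and real audits collect this data with instrumented browsers or volunteer panels.

```python
# Minimal audit sketch: how similar are the feeds served to two
# test accounts? Sample data is invented for illustration.

def jaccard(a, b):
    """Overlap between two sets of post IDs (1.0 = identical feeds)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

feed_account_1 = ["p1", "p2", "p3", "p4"]
feed_account_2 = ["p3", "p4", "p5", "p6"]

print(jaccard(feed_account_1, feed_account_2))  # 2/6 ≈ 0.33
```

A low overlap between otherwise similar accounts is a crack in the black box: it doesn't reveal the formula, but it shows personalization at work.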
Together, these approaches don't completely 'open' the algorithm's black box, but they create cracks through which it's possible to observe its logic and effects in the real world. And you… can you still identify when your feed is being manipulated?