In recent months, several studies and headlines have echoed an old concern: is ChatGPT making us dumber? An MIT study found that people who use AI to write engage fewer brain regions linked to memory and language.
Hi, I'm Laura, I have a master's degree in communication and I'm a student of internet systems. And I want to show you that this concern about the loss of cognitive ability due to technology has existed since the time of Plato.
In his dialogue Phaedrus, Plato presented writing as both a remedy and a poison. He argued that depending on it could atrophy our memory, since we would be delegating to technology the task of remembering for us.
It is curious how this same argument resurfaces now that we talk about language models. We feel we are handing over thinking and creating to a new form of writing: the prompt. However, this fear stems from a deep philosophical confusion.
The ethics of technology are always political because they are shaped by human, corporate, and economic decisions. When we say "technology is dangerous" without naming the actors behind it, responsibility disappears.
Moralizing the debate — saying that "using AI is wrong" — means falling into the trap of big tech companies, which prefer a confused public to a critical and organized one. The real dispute is not technical; it is about power relations.
The problem is not that AI is making us "dumb", but that it is being pushed on us by economic forces while we grow increasingly dependent on its output to meet the demands of accelerated productivity.
The algorithm IS NOT NEUTRAL. Ethical AI requires regulation, transparency, and accountability from companies. We must understand technology as an object of active political dispute.
Ultimately, the essential question that emerges is: who benefits from us focusing on "cognitive loss" instead of corporate power? And you... do you think we are losing our capacity to think, or just changing our tools?