[ PUBLISHED: 2025.22.10 ]

On the article by Amoore et al. (2024), and how to think about AI politically

According to Amoore et al. (2024), the logics by which large language models predict, compress, and infer have clear political effects; they are already forms of power operating in the present.
/// Laura Barros
/// laurabarros5@gmail.com
/// insta: @laurabarros5

When we talk about artificial intelligence, we usually think of innovation and productivity. But researcher Louise Amoore proposes something else: she wants us to think about AI politically.

Hi, I'm Laura, I have a master's degree in Communication and Information and I'm an undergraduate student in Internet Systems. Today I want to discuss the political logics embedded in generative AI and how they produce new forms of power.

Amoore presents a methodological proposal: to observe the mechanisms of AI — how it learns, combines, and decides — to understand its role in governance. She identifies four main political logics: generativity, latency, attention, and zero-shot.

The logic of generativity means AI doesn't just recognize patterns; it imagines possibilities. In systems like Palantir AIP, used in military contexts, technical calculations become political decisions because the AI suggests how to act in the world based on strategic simulations.
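To make the recognize/imagine distinction concrete, here is a toy sketch of my own (not Amoore's method, and nothing like a real military system): a model fits a simple statistical pattern to data, then both checks whether new inputs fit that pattern (recognition) and samples possibilities it never observed (generation).

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(loc=5.0, scale=2.0, size=1000)  # toy "training data"

# Recognition: does a new value fit the pattern the model has seen?
mean, std = observed.mean(), observed.std()
def looks_familiar(x):
    return abs(x - mean) < 2 * std

# Generation: imagine new values the model has never seen,
# drawn from the pattern it learned.
imagined = rng.normal(loc=mean, scale=std, size=5)

print(looks_familiar(5.2))   # True
print(imagined.shape)        # (5,)
```

The political point survives the simplification: the generated values are not records of the world but proposals about it, produced from a learned distribution.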

The logic of latency refers to the "latent space" — a mathematical map where the system compresses billions of data points into hidden traits. When an AI creates a face from scratch, it navigates these invisible relationships, often reproducing deep-seated biases and stereotypes.
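A minimal illustration of what a latent space does, using plain PCA (via SVD) instead of a real generative model; the dimensions and data here are invented for the example. Eight visible features per point are compressed into two hidden traits, from which the points can be approximately reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 8))          # 100 points, 8 visible features
centered = data - data.mean(axis=0)

# SVD gives the principal directions of variation in the data
_, _, vt = np.linalg.svd(centered, full_matrices=False)

latent = centered @ vt[:2].T               # each point becomes 2 hidden traits
reconstructed = latent @ vt[:2] + data.mean(axis=0)

print(latent.shape)          # (100, 2)
print(reconstructed.shape)   # (100, 8)
```

Models like VAEs or diffusion systems learn far richer latent spaces than this, but the principle is the same: whatever correlations exist in the training data, including biased ones, get baked into the hidden coordinates.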

Then we have the logic of attention. In models like ChatGPT, the AI "learns" what should receive focus at any given moment. This reorganizes the world in terms of relevance; whatever stays outside this focus simply ceases to exist for the model and for our collective knowledge.
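Stripped of the learned projections and multiple heads that real transformers use, the core of attention is just a relevance distribution over tokens. A minimal sketch with made-up vectors:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 4
query = rng.normal(size=d)        # what the model is "looking for" right now
keys = rng.normal(size=(4, d))    # one vector per token in the context

scores = keys @ query / np.sqrt(d)  # similarity of the query to each token
weights = softmax(scores)           # "relevance" distribution, sums to 1

print(round(weights.sum(), 6))      # 1.0
```

The softmax is the political detail: because the weights must sum to 1, giving more attention to one token necessarily takes it from the others. Relevance is allocated, not discovered.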

The fourth is the zero-shot logic. This means the AI acts even without specific prior training for a task, deducing what to do based on general probabilities. When used by governments to respond to crises, it creates a "politics of automated improvisation" — decisions made without real historical context.
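A hedged sketch of the idea: the hand-made vectors below stand in for a pretrained model's embeddings, and the words and labels are hypothetical. The point is that no task-specific training happens; the system just ranks similarities it already encodes in a shared space.

```python
import numpy as np

# Toy stand-ins for pretrained embeddings (invented for this example)
embeddings = {
    "flood":     np.array([0.9, 0.1, 0.0]),
    "evacuate":  np.array([0.8, 0.2, 0.1]),
    "celebrate": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot(word, labels):
    # The model was never trained on this label set; it generalizes
    # from similarities it already encodes.
    return max(labels, key=lambda label: cosine(embeddings[word], embeddings[label]))

print(zero_shot("evacuate", ["flood", "celebrate"]))  # flood
```

This is why the result can feel like improvisation: the answer is deduced from general geometric proximity, not from any record of how this specific task was handled before.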

These logics show that AI is not just a tool, but a space of governance. Each model carries a way of seeing and ordering the world. It doesn't just represent reality — it produces it by creating possible worlds while erasing others.

Human experience is being replaced by probabilistic calculation. This shifts the power of defining what is true or relevant from social processes to invisible algorithmic infrastructures controlled by a few corporations.

The algorithm IS NOT NEUTRAL. AI is a space of political dispute because its "mathematical" decisions have clear political effects and are already forms of power in operation in our present.

Ultimately, the essential question that emerges is: who defines the logics that organize our reality? I want to know your thoughts on this "automated improvisation" in the comments.