Let your competitors work harder – Let your AI work smarter – Let us show you how to win!

COTOAGA.AI

by cotoaga.net

Cognitive Devolution

We Are the Main Dish

How a century of training humans to think like machines arrived at its logical conclusion.

From Sphere to Vector

Mathematical Model of Cognitive Devolution

[Figure: the Cognitive Sphere (Polymath · Interdisciplinary · Spherical; ΣΦΑΙΡΑ) compressed into a Vector]

r(θ, φ, t) = S(t) · [1 + H(t) · Y(θ, φ)] · D(θ, t)

Superquadric × Spherical Harmonics × Axial Deformation
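The radial formula above can be evaluated directly. The sketch below is a toy illustration, not the paper's model: the particular forms of S(t) (overall scale), H(t) (harmonic richness) and D(θ, t) (axial deformation) are my own assumptions, chosen only so that the surface starts nearly spherical and grows more elongated along one axis as t increases; Y is the real spherical harmonic Y₂⁰.

```python
import math

def r(theta, phi, t,
      S=lambda t: math.exp(-0.05 * t),       # assumed: overall scale slowly shrinks
      H=lambda t: 0.3 / (1.0 + t),           # assumed: harmonic richness decays
      D=lambda theta, t:                      # assumed: equator squashed over time
          1.0 - 0.5 * (1 - math.exp(-0.1 * t)) * math.sin(theta) ** 2):
    """Radius of the cognitive surface in direction (theta, phi) at time t."""
    # Real spherical harmonic Y_2^0 (axisymmetric; independent of phi)
    Y = math.sqrt(5 / (16 * math.pi)) * (3 * math.cos(theta) ** 2 - 1)
    return S(t) * (1 + H(t) * Y) * D(theta, t)

# Early on, pole and equator radii are comparable; later, the equator
# collapses faster than the pole: a sphere squeezed toward a vector.
early = r(0.0, 0.0, 0.0) / r(math.pi / 2, 0.0, 0.0)
late = r(0.0, 0.0, 50.0) / r(math.pi / 2, 0.0, 50.0)
```

Under these assumed functions the pole-to-equator ratio grows with t, which is the whole point of the picture: the surface loses volume everywhere, but it loses it anisotropically.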

An April 2026 research paper from USC gave a name to what I've been watching for two years. They call it WHELM — Western, high-income, educated, liberal, male — and it's the cognitive shape your thinking is being quietly compressed into every time you ask ChatGPT to help you write, decide, or think. The training loop runs both ways now: the model learned from us, and now we are learning from the model. Sanding down our voices. Pre-flattening our reasoning. Optimising ourselves to be easier for the machine to process.

I should know. I helped build the first version of this trap, twenty-five years ago, in finance. And I've been watching the third wave land, in real time, in European boardrooms.


Groundhog Day in the C-Suite

For two years I've been having the same conversation, in different cities, with different executives, in different industries. The conversation is about the EU AI Act and what they intend to do with their organisation in light of it. Until recently there were two answers.

Answer one: Which act?

Answer two: That doesn't apply to us.

Bad, but manageable. Both responses are forms of not yet. You can work with not-yet. You can come back next quarter with a better case, a sharper example, a more recent fine.

Then, around the end of 2025 and the beginning of 2026, a third answer started showing up. It is far more dangerous than the other two combined.

Answer three: We can prompt quite well, thank you for asking.

The third answer doesn't refuse the conversation. It declares the conversation already won. The executive is not avoiding AI. The executive has integrated AI — by which they mean their team has gotten good at typing things into ChatGPT and pasting the output into emails, decks, and decisions.

This is not adoption. It is the third wave of a process that has been running for a century.

And it ends with us as the main course.


The Three Waves

Stand far enough back and the trajectory is clean.

First wave: we standardised human learning.
Through eight centuries of optimisation pressure — the medieval university, the Prussian school system, the credit-hour, the Bologna Process — we converted a sphere of human cognitive potential into a vector of measurable competencies. We stopped asking what a person knows and started asking how many modules they've completed. We taught humans to think in the shape institutions could grade.

Second wave: we standardised that knowledge into LLMs.
We took the textbooks, the papers, the corporate playbooks, the Stack Overflow answers, the trained-on-standardised-curricula prose of a billion people — and fed it into machines. Then we trained the machines to give it back to us in clean, polite, vector-shaped responses. This is what people mean when they say AI was trained on us. They are right. But they are not seeing the whole picture.

Third wave: we are standardising ourselves to fit the machine.
This is the wave we are in now, and most people do not yet see it because it feels like productivity. We are reshaping how we write, how we argue, how we structure a thought, how we ask a question — all to be more legible to a system whose architecture rewards a particular kind of crisp, polite, WHELM-shaped reasoning. We are not using AI. We are auditioning for it.

We spent a century training humans to be biological AI systems. The silicon versions have arrived to collect their inheritance. And now, like the main course on the table, we are politely asking the diners how they would like us spiced.

A Confession from the First Wave

I am not pointing at anyone. I built the first version of this.

In 1997 I started building decision-support systems for retail investors. The pitch was beautiful: democratised financial investment expertise.

Take the math the institutions had — Markowitz, Black-Scholes, risk-adjusted portfolios — and put it in a tool a private investor could use. Let them decide like a grown-up.

The methods were real. The interfaces were clean. The risk questionnaires were thoughtfully designed. And every single one of those systems was quietly tweaked so that it always pointed the user toward products the industry was selling.

We called it win-win-win.

I had the good feeling that the right methods were being applied.
The investor had the good feeling that the decision had been well thought through.
The financial service provider had the best feeling of all: they kept their margins.

Three feelings. One transaction. Two parties got a story; one party got the money. That was the first wave from the inside. We were not capturing expertise. We were teaching humans to feel like they were exercising it, while the system did the actual choosing. The user learned to think inside the tool. Outside the tool, they were as helpless as ever — but now they had a printout, a credential, and a sense of competence. The LLMs do this at planetary scale now. The mechanism is identical. Only the margin has changed hands.

Sphere to Vector, in Plain English

Watch a five-year-old for ten minutes. She will see a dragon in a cloud. He will ask why money exists. They will tell you that the man at the bakery is sad today, and they will be right.

She will connect a sound, a color, a memory, and a question into something that has no name and does not need one.

Her thinking is a sphere: dense in every direction, connected to everything, organised around no single axis.

Now watch a twenty-two-year-old fresh graduate. He can answer one question very well — the question his degree was optimised to grade him on. He has been trained, over sixteen years, to compress that sphere into a vector: a single direction, measurable, comparable, gradable, hireable. The cloud-dragons are gone. The "why does money exist" muscle has atrophied. They have all been vectorised.

This is not just a metaphor. Inside a large language model, your prompt is literally tokenized, embedded as a vector in high-dimensional space, and processed along narrow attention pathways.

The educational pipeline we built does not just resemble that architecture. It is that architecture, in biological form. Sixteen years of school turn a sphere into something a vector machine can read. The machine arrived to read it.
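The tokenize-then-embed pipeline described above can be sketched in a few lines. This is a deliberately crude stand-in — whitespace splitting instead of BPE, hash-derived numbers instead of a learned embedding matrix, and an arbitrary width of 8 — but the shape of the operation is the same: text in, one fixed-size list of floats out.

```python
import hashlib

DIM = 8  # toy embedding width; real models use thousands of dimensions

def tokenize(text: str) -> list[str]:
    # Crude whitespace tokenizer standing in for BPE/WordPiece
    return text.lower().split()

def embed(token: str) -> list[float]:
    # Deterministic pseudo-embedding: hash bytes mapped to small floats.
    # A real model looks tokens up in a learned embedding matrix instead.
    digest = hashlib.sha256(token.encode()).digest()
    return [b / 255.0 - 0.5 for b in digest[:DIM]]

def to_vector(text: str) -> list[float]:
    # Mean-pool the token embeddings into one fixed-size vector:
    # whatever shape the thought had, it leaves as a list of floats.
    vecs = [embed(t) for t in tokenize(text)]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

v = to_vector("Why does money exist?")
```

However rich the input, `to_vector` always returns the same flat container — which is exactly the compression the essay is describing.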


The Choice

You can keep training yourself to be a more useful input. The machine will reward you for it, briefly, before it discovers it doesn't need you to be useful at all.

Or you can begin the slow, unglamorous, decade-scale work of being a sphere again.

The full case — 125 pages, 73 citations, considerably less polite than this essay — is in the paper. This web page is the essence of a summary.

Warning: it is long, dense, and was written for the part of you that the third wave is currently trying to flatten. The jester recommends reading it anyway. The court rarely thanks him.

We are the main dish. The dinner has begun.

The only question left is whether we spend the rest of the meal asking the diners how we'd like to be seasoned — or whether we get up from the table.

Download the Cognitive Devolution paper