AI is a Mood, Not a Method
The Choice Is Between Systems Designed to Impress and Systems Designed to Serve
Your phone’s keyboard used to be good. Remember? A few years ago, it learned your typing patterns, predicted your next word with uncanny accuracy, and rarely made you look like an idiot. It was one of those quiet miracles of modern software: personalized, elegant, useful.
Then some product manager said “AI” in a meeting.
Now your keyboard is laggy, generic, and wrong half the time. It suggests words you’d never use, corrects things that weren’t errors, and somehow manages to be both slower and less accurate than the system it replaced. This is enshittification in miniature: a working machine learning product rebranded as “AI,” burdened with compute it didn’t need, and worsened in the name of progress.
Every time you curse autocorrect, you’re feeling the myth at work.
Here’s the thing nobody in Silicon Valley wants to admit: “Artificial Intelligence” isn’t a field of study. It’s a mood, an eschatology with a grant proposal attached. It names an ambition, not a method: building machines with the cognitive capacities of a human being.
That ambition is perennial, so we keep repeating the same ritual: we invent something that works, baptize it “AI,” discover it doesn’t produce consciousness, and quietly rebrand the useful residue as machine learning or data science or control theory.
That’s the loop. It’s geological. The ambition erupts, the lava cools into usable technology, and we forget it ever happened.
The Cycle We Keep Repeating
This isn’t new. In 1956, ten researchers gathered at Dartmouth College and proposed to “make machines use language, form abstractions, and improve themselves.” Within a decade, the dream collapsed under computational limits, but it left behind search algorithms, compilers, and the first glimmers that computers could solve problems without explicit instructions.
The 1970s brought expert systems: MYCIN diagnosed infections, DENDRAL identified chemical compounds. For a moment, AI was a business. Then the rule sets metastasized into hundreds of thousands of lines of “if A and B then C,” and companies collapsed under the weight of their own expertise. The dream of the “Expert System” died, but the “Business Logic” survived. Today, it runs nearly every bank transaction and insurance claim in the world, but we just call it enterprise software.
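To make that residue concrete, here is a minimal, hypothetical sketch of the kind of rule engine that still runs those transactions and claims; the fields, thresholds, and outcomes are invented for illustration, not taken from any real system.

```python
# A minimal sketch of the "if A and B then C" machinery that outlived the
# expert-system label. Field names, thresholds, and outcomes are hypothetical.

def evaluate_claim(claim: dict) -> str:
    """Apply hand-written rules to an insurance claim, in priority order."""
    rules = [
        # (condition, outcome) pairs: the "knowledge base," written by humans.
        (lambda c: c["amount"] > 50_000 and not c["police_report"], "refer_to_investigator"),
        (lambda c: c["policy_lapsed"], "deny"),
        (lambda c: c["amount"] <= 1_000 and c["claims_last_year"] == 0, "auto_approve"),
    ]
    for condition, outcome in rules:
        if condition(claim):
            return outcome
    return "manual_review"  # default when no rule fires

print(evaluate_claim({"amount": 800, "police_report": True,
                      "policy_lapsed": False, "claims_last_year": 0}))
# -> auto_approve
```

The “knowledge” is just conditions a human wrote down, which is exactly why it scaled as enterprise software and collapsed as a theory of mind.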
The pattern repeats through the decades: probabilistic networks in the ‘80s, statistical natural language processing in the 2000s, deep learning’s GPU-powered renaissance in the 2010s. Each era begins with breathless predictions of imminent artificial consciousness. Each ends with useful tools that nobody calls “AI” anymore.
Now, we’ve seen the sixth resurrection: the Foundation Era. Large language models trained on half the internet write sonnets and code that mostly compiles. OpenAI dropped ChatGPT, and the world lost its mind. Microsoft bolted it onto Bing. Google panic-released Bard, which immediately hallucinated its way into a $100-billion market-cap dip, and so on, to the present moment.
And once again, everyone thinks the machines are alive.
They are not. Inside the labs, it’s the same reality it’s always been: data pipelines, reinforcement learning from human feedback, model cards, fine-tuning. It’s data and software engineering at planetary scale.
Three Alienations
LLMs are infrastructure, not aliens. They’re autocomplete for thought: remarkably useful, entirely mechanical, and nowhere close to conscious.
The mythology persists because it sells GPUs. “AI” is better copy than “regularized nonlinear regression on token embeddings.”
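If that phrase sounds like jargon, here is a deliberately toy sketch of the operation it names: token embeddings in, a probability distribution over the next token out. The weights below are random rather than trained, and real models add attention layers, enormous scale, and regularization such as weight decay, but the basic mechanics are this mundane.

```python
import numpy as np

# A toy next-token predictor: look up token embeddings, apply a nonlinear
# transformation, and produce a probability for every word in the vocabulary.
# The weights are random here; training is what makes the real thing useful.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8                                         # embedding dimension (toy-sized)

E = rng.normal(size=(len(vocab), d))          # token embedding table
W1 = rng.normal(size=(d, d))                  # hidden-layer weights
W2 = rng.normal(size=(d, len(vocab)))         # projection back onto the vocabulary

def next_token_distribution(context: list[str]) -> np.ndarray:
    """Pool the context embeddings, apply a nonlinearity, softmax over the vocab."""
    x = E[[vocab.index(t) for t in context]].mean(axis=0)
    h = np.tanh(x @ W1)                       # the "nonlinear" part of the regression
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return p / p.sum()                        # probability of each candidate next token

probs = next_token_distribution(["the", "cat", "sat", "on"])
print(vocab[int(probs.argmax())])             # the single most likely continuation
```

Everything an LLM “says” is a chain of draws from distributions like this one, only at vastly greater scale.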
Here’s what actually matters about this cycle: each iteration changes how humans relate to their own cognition.
Factories alienated our labor: the skill of the craftsperson disappeared into the machine, and we became button-pushers watching our own hands become obsolete. Social media alienated our attention: the organic flow of curiosity and connection was routed through algorithmic feeds designed to maximize engagement, not understanding.
Now “AI” threatens to alienate our cognition itself.
It’s still us doing the work. The humans labeling images in Nairobi and Manila for pennies. The content moderators contracted by OpenAI to read descriptions of abuse and violence so the models could be made “safe.” The users typing prompts that train the next generation for free. The chosen abstractions and interfaces hide our input behind a glossy pane, but we’re still the ones feeding the machine.
We can build interfaces that let us see our own thought patterns, coordinate reasoning with others, and externalize insight without losing track of where it came from. That’s the promise: tools that help people think, not tools that replace thinking with something that merely looks like it.
Let’s Talk About the Raw Material
If we strip away the “AI” mythology and look at the engineering reality, the implications should be catastrophic for the model builders. After all, if an LLM is just a statistical aggregation of human output – which it is – then it isn’t an “invention”, it’s a derivative work.
We aren’t witnessing the birth of a new species; we are witnessing the greatest enclosure of the commons in human history. They strip-mined our collective cognition, refined it into a subscription service, and sold it back to us as a psychotically accommodating feedback loop.
Every Reddit thread from 2015 to 2023 wasn’t just “content”, it was labor. Every debate about coding standards, every niche hobbyist guide, every relationship advice thread: that was the training data. If our intellectual property law had any equity, we’d have to admit that every shitposter and power-user on those forums rightfully holds a fractional copyright on the models trained on their words.
I’m sure that in the current world, Redditors signed away any and all such rights in Subsection N of Appendix Q of the 12th version of the Reddit Terms of Service. But no one clicking “agree” envisioned their words being used to train a model marketed largely to justify layoffs by convincing bosses that a chatbot can replace a department.
Who Benefits From the Confusion?
The mythology isn’t an accident. It serves specific interests.
For venture capitalists, “AI” justifies valuations that “next token prediction” doesn’t. For executives, it explains why they need to spend billions on new infrastructure. For cloud providers, it creates demand for compute that might otherwise look excessive. For Big Tech, it positions them as the builders of the future rather than the custodians of a slowly decaying present. For politicians and regulators, “AI” provides a convenient “Arms Race” narrative. It allows them to justify protectionism, surveillance, and massive subsidies without having to explain the technical details.
If it’s a “God,” it needs a High Priest; if it’s a “Weapon,” it needs a General; if it’s “The Perfect Worker,” it needs a Manager. None of those have much use for a simple tool.
For everyone else, the myth provides cover for labor practices that would look dystopian if described plainly. Thousands of people in developing countries spend their days labeling images, rating text, and flagging harmful content for a few dollars an hour. This isn’t artificial intelligence; it’s outsourced human intelligence, arbitraged through global wage disparities and hidden behind the veneer of automation.
We Live in the Banal Singularity
The irony is that these tools – LLMs, diffusion models, the whole zoo – could actually make us smarter together. They’re not alien intelligences; they’re mirrors polished to near-perfection. Used right, they expand the perimeter of what a small team, or even a single person, can think about.
I use them constantly: to reason, to draft, to test ideas against synthetic interlocutors. They are scaffolds for cognition. The miracle isn’t synthetic consciousness, it’s shared understanding made faster.
That only works if we understand what we’re using. When the mythology obscures the mechanism, we lose the ability to use these tools deliberately. We start believing our autocomplete is wise rather than statistical. We mistake fluency for understanding, coherence for truth.
This is how products get worse while claiming to get better. The old keyboard used simple models trained on your specific typing patterns: fast, efficient, and personalized. The new one uses massive models trained on everyone’s data, deployed through cloud infrastructure, optimized for scenarios that have nothing to do with your thumbs. It’s “smarter” in the abstract and dumber for you specifically.
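For contrast, the old approach fits in a few lines: counts built from nothing but your own typing, updated on the device. The sample history below is invented, and a real keyboard adds smoothing and a longer context, but personal data plus a small model is the whole trick.

```python
from collections import Counter, defaultdict

# A tiny bigram predictor in the old style: built only from words the user has
# actually typed, small enough to live on the device. The sample history is
# invented; a real keyboard would keep updating these counts as you write.

history = "see you at the gym tomorrow see you at the office tomorrow".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    bigrams[prev][nxt] += 1                   # count what *you* type after each word

def suggest(prev_word: str, k: int = 3) -> list[str]:
    """Return the k words this user most often types after prev_word."""
    return [word for word, _ in bigrams[prev_word].most_common(k)]

print(suggest("you"))   # -> ['at']
print(suggest("the"))   # -> ['gym', 'office']
```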
What Demystification Looks Like
The path forward isn’t to ban these tools or worship them. The path forward is to understand them.
Machine learning is engineering. It’s statistics at scale, pattern recognition that works because it fails in predictable ways, systems that can be tested and improved and deployed responsibly when we treat them as tools rather than oracles.
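In practice that looks unglamorous, something like the hypothetical sketch below: a stand-in classifier, a handful of labeled examples, and the same kind of regression test you would write for any other component. The names and threshold are invented; the point is that a tool gets measured and gated, while an oracle gets trusted.

```python
# Evaluation as a regression test: a stand-in classifier, a few labeled
# examples, and a deployment gate. Everything here is hypothetical; the point
# is the shape of the process, not the model.

class KeywordClassifier:
    """Placeholder model: labels a message 'spam' if it contains a flagged word."""
    def __init__(self, flagged_words):
        self.flagged_words = set(flagged_words)

    def predict(self, text: str) -> str:
        return "spam" if self.flagged_words & set(text.lower().split()) else "ok"

def evaluate(model, examples) -> float:
    """Fraction of labeled examples the model gets right."""
    return sum(model.predict(text) == label for text, label in examples) / len(examples)

held_out = [
    ("win a free prize now", "spam"),
    ("lunch at noon?", "ok"),
    ("payment failed, please help", "ok"),
]

candidate = KeywordClassifier(["free", "prize"])
accuracy = evaluate(candidate, held_out)

# Gate the release the way any other engineering change is gated:
# no deployment if it regresses below the current baseline.
assert accuracy >= 0.9, f"accuracy {accuracy:.2f} fell below baseline"
# Pin known failure modes as explicit tests, not folklore.
assert candidate.predict("payment failed, please help") != "spam"
print(f"release candidate passes: accuracy={accuracy:.2f}")
```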
There are people doing this work right now. Anat Caspi’s Taskar Center uses ML not to automate navigation, but to build OpenSidewalks: maps that learn a city’s barriers so a person in a wheelchair can define their own path. Ted Underwood’s digital humanities lab uses models to scan centuries of literature, acting as a telescope for historians to spot patterns of gender and language too vast for a single human lifetime to read. These tools don’t replace the navigator or the historian. They extend their range.
These aren’t the projects that make headlines. They don’t promise to revolutionize everything. They just work, quietly broadening specific horizons for specific people.
The Actual Alignment Problem
Factories abstracted our muscles. Feeds abstracted our attention. Now we’re abstracting cognition itself. The question, as always, is cui bono – who benefits?
We can build systems that enclose our minds. We can build black boxes that make decisions we can’t interrogate, interfaces that hide human labor behind synthetic personalities, tools that make us less capable of thinking for ourselves.
Or, we can build systems that let minds meet each other more clearly. We can build tools that help people coordinate understanding, externalize reasoning, and augment their thinking without losing track of where the machine ends and the human begins.
The choice isn’t between progress and stagnation, it’s between mythology and mechanism, between systems designed to impress and systems designed to serve.
Your phone’s keyboard knows which one it’s become. You feel it every time it betrays your thumbs. The question is whether we’ll demand better: not more “intelligent” tools, but more honest ones.
Because at the end of the day, at least at the moment, there is no artificial intelligence. There is only human intelligence, extracted, aggregated, and sold back to us at a premium. The choice isn’t whether to use this particular machine, but whether to let others use it to claim credit for our work.

