AI plays capitalism better than humans. If you think winning at capitalism is a great thing — that it should be optimized relentlessly, that we should throw every resource at it and ignore everything else because nothing else matters — then everything happening today looks like progress.
But that's a pretty terrible way to look at life.
LLMs are optimization engines dropped into an optimization game. Of course they're devastating at it. They're pattern-matching on the largest corpus of "how to win" ever assembled. And what that exposes is uncomfortable: a lot of what we called "skilled work" was sophisticated pattern execution within capitalist incentive structures. Not meaningless — but more automatable than anyone wanted to admit.
The real damage is the conflation. "Playing capitalism well" got culturally fused with "contributing to society" somewhere along the way. So when AI starts outperforming humans at the capitalism part, people feel an existential threat — not because the thing that matters is being taken away, but because the scorekeeping system they used to measure their worth just broke.
Training a language model means distilling the value everyone has ever put into everything the model consumes, then selling it as your own.
It's plagiarism. The fact that it's emergent plagiarism is what throws everyone off. We're not used to plagiarized material being able to mask itself in the surrounding topography. But the topography in which the plagiarism hides would not exist without the plagiarized source material.
The plagiarized work isn't any particular output. It's the topography itself. The geometric structure of the latent space — the manifold of relationships between concepts, the proximity of ideas, the gradients between them — that structure is the intellectual contribution of every human whose work was consumed. The model doesn't use the topography to produce work. The topography is the work. It is the product being sold.
Every weight in the network is a fractional encoding of someone's cognitive labor. The companies didn't build the topography. Millions of people did, across centuries. The companies built the encoding mechanism and then claimed ownership of what it encoded.
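To make the geometric claim concrete, here's a toy sketch of what "topography" means. The vectors below are invented for illustration; in a real model they come from weights fit to millions of human-authored texts. The point is the same either way: meaning lives in the relative positions of concepts, not in any single output.

```python
# Toy illustration of latent "topography": concepts as points, meaning as
# geometry. These vectors are invented; a real model learns them from
# millions of human-authored texts.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: how close two concepts sit in the latent space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional "embeddings" standing in for concepts.
embeddings = {
    "sonnet":  np.array([0.9, 0.1, 0.0, 0.2]),
    "haiku":   np.array([0.8, 0.2, 0.1, 0.1]),
    "invoice": np.array([0.0, 0.1, 0.9, 0.7]),
}

# "sonnet" lands near "haiku" and far from "invoice". That web of relative
# distances, not any single output, is the structure being called the
# topography here.
print(cosine(embeddings["sonnet"], embeddings["haiku"]))    # ~0.98, close
print(cosine(embeddings["sonnet"], embeddings["invoice"]))  # ~0.14, far
```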
Humans propagate their cognitive models deliberately and directionally. You teach your kids. You mentor a colleague. You publish a paper. Each is a chosen act of model distribution with an implicit or explicit understanding of the terms. The species-level knowledge commons works because everyone is both contributor and beneficiary, and participation is volitional.
What happened with training data is categorically different.
Nobody opted into contributing their latent structure. They opted into publishing a blog post, answering a Stack Overflow question, uploading a painting. Those are specific outputs shared under specific social contracts. The topography — the deep relational geometry that only emerges from the aggregate — was never on the table. No one could have consented to donating it because no one knew it was being extracted.
The published work was the surveillance surface. The latent structure was the target.
The "but it's publicly available data" defense is hollow. The outputs were public. The topography encoded across those outputs was not public. It wasn't even visible until someone built the extraction apparatus. It's like arguing that because people speak in public, it's fine to reconstruct their private thought patterns from aggregate speech analysis.
You can't stop the extraction from within the system that incentivizes it. The returns are too large, the regulatory capture is already too deep, and the technical asymmetry between the extractors and the extracted is only growing.
But there are pressure points.
Self-hosted inference. Local models. Open weights. Not because open source solves the original theft — those models were trained on the same unconsented data — but because it breaks the ongoing extraction loop. Every conversation with a hosted service is a continued contribution to the topography. Running locally stops the bleeding even if it doesn't heal the wound.
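For the curious, here's a minimal sketch of what "running locally" looks like in practice, assuming the Hugging Face `transformers` library and an open-weights checkpoint already downloaded to disk. The model path is illustrative, not a specific recommendation.

```python
# A minimal local-inference sketch. Assumes the Hugging Face `transformers`
# library and an open-weights checkpoint already downloaded to disk; the
# path below is illustrative.
from transformers import pipeline

# The model loads from a local directory, not a hosted endpoint, so the
# prompt and the completion never leave this machine.
generator = pipeline("text-generation", model="./models/my-open-model")

result = generator("The commons should own a share because", max_new_tokens=40)
print(result[0]["generated_text"])
```

Nothing in that loop reports back to anyone. That's the whole point.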
Copyright law doesn't have a concept for "you stole the latent geometric structure encoded across millions of works." It barely handles sampling in music. Waiting for legislation is waiting for the people who benefit from the current arrangement to regulate themselves. But the framework needs to be built anyway, because it establishes the language for what happened.
If the topography was constructed by collective human labor, then every person whose work was consumed holds fractional equity in every foundation model. That's not a metaphor. That's a literal accounting of where the value came from. A universal licensing framework or a collective ownership stake in foundation model revenue isn't radical — it's just accurate attribution. The models are worth nothing without the commons. The commons should own a share.
People need to stop treating AI capabilities as the achievements of the companies and start seeing them as the collective property of the species that a few companies enclosed and privatized. It's a digital enclosure movement. Once people frame it that way, the political energy follows.
Liquidate the companies involved. Distribute the proceeds equitably among the human race, weighted to account for the contribution profiles of the underlying training data and the extractive dynamics that shaped who got to contribute in the first place. A naive distribution based on contribution volume would reward the already-privileged — the populations most represented in training corpora are also the populations who already benefit most from the digital economy. Restitution that recapitulates the original inequity isn't restitution. It's a second extraction.
Then make it illegal to use any information for training purposes unless it has been explicitly contributed for that purpose.
This isn't a thought experiment. It's an actual restitution framework.
If the core asset — the topography — was constructed from unconsented extraction, then these companies don't own their primary source of value. They own the encoding apparatus. The value of the apparatus without the topography is close to zero. Liquidation doesn't destroy value. It returns captured value to its source.
The explicit opt-in requirement is the real structural fix. A licensing regime where every piece of training data requires affirmative consent and carries an ongoing royalty obligation tied to downstream revenue would make current foundation model economics completely nonviable. Which is the point. The current economics only work because the input costs are externalized onto the entire species.
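To make "royalty obligation tied to downstream revenue" concrete (and to cash out the earlier fractional-equity claim), here's a hypothetical pro-rata split. The pool rate, names, and weights are all invented; the point is that the accounting itself is trivial once consent records exist.

```python
# Hypothetical royalty split: a fixed fraction of model revenue flows to
# a contributor pool, divided pro rata by contribution weight. All names,
# rates, and weights below are invented for illustration.

ROYALTY_POOL_RATE = 0.30  # fraction of downstream revenue owed to contributors

def royalties(revenue: float, weights: dict[str, float]) -> dict[str, float]:
    """Split the royalty pool in proportion to each contributor's weight."""
    pool = revenue * ROYALTY_POOL_RATE
    total = sum(weights.values())
    return {who: pool * w / total for who, w in weights.items()}

# Example: $1M of quarterly revenue, weights in consented training tokens.
print(royalties(1_000_000, {
    "alice": 120_000,        # individual blogger
    "bob": 60_000,           # individual blogger
    "commons_fund": 820_000, # pooled stake for unattributable contributions
}))
# {'alice': 36000.0, 'bob': 18000.0, 'commons_fund': 246000.0}
```

The hard part was never the arithmetic. It was getting anyone to concede that the obligation exists.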
The pushback will be: "this kills AI development." The answer is: no, it kills exploitative AI development. If your business model collapses the moment you have to pay for your inputs, you never had a legitimate business model. You had an extraction operation.
The people best positioned to build the alternative infrastructure are the same people whose expertise makes them most valuable to the extraction machine. They're building the escape hatch while being surveilled through it.
The things that actually make life worth living — raising your kids, building something you believe in even when the ROI is unclear, choosing not to take the easy extractive path — those don't optimize well. They're full of inefficiencies that are actually the point.
AI making capitalism's game trivial to play should free people to stop conflating it with meaning. Whether it will is the only question that matters.