The Wrong Apocalypse – Op-Ed
Between 28 January and 13 February 2026, over USD 2tn in market value disappeared from the enterprise software sector.
The catalyst was the launch of specialised plugins for Anthropic’s Claude Cowork and the continued advance of Claude Code—tools that showed an AI system could draft legal documents, manage workflows, and automate the structured knowledge work that sustains billions in recurring software revenue. The market’s logic was simple: if an AI agent can do what current solutions offer, why would anyone keep paying for them?
That logic gained extra force from Dario Amodei’s essay The Adolescence of Technology, built around the thought experiment of a “country of geniuses in a datacenter” posing civilisational risk—including economic disruption. Many readers took it as confirmation from inside the machine that the machine works, and that the timeline is short.
I argue that this reading is wrong—not about the direction of travel, but about the mechanism, the timeline, and which companies are exposed.
The market is treating “AI can do the task” as equivalent to “AI can replace the system”. This is a substitution fallacy. Enterprise software is not primarily a tool for doing cognitive work. It is a tool for coordinating cognitive work across organisational boundaries under conditions of incomplete trust—shared definitions, permissions, audit trails, escalation paths, compliance controls.
A new hire at a consulting firm can produce better analysis than the firm's PowerPoint templates allow. That does not mean the firm no longer needs PowerPoint. The templates exist to standardise output, speed review, enforce quality control, and meet client expectations. They are institutional artefacts, not cognitive ones. Enterprise software is, at scale, a vast collection of such artefacts. The value is not in the computation but in the coordination.
The philosopher Ludwig Wittgenstein argued that words carry meaning not in the abstract but because participants share what he called a “language game”—a set of rules, contexts, and purposes that make communication possible. Enterprise software, at its most entrenched, is a form of life. Organisations do not just use Salesforce; they speak Salesforce. Replacing the software is not like swapping one tool for another. It is like asking a community to adopt a new language. It can be done, but not quickly.
I believe AI will erode the commodity layer of enterprise software, the tasks that are primarily cognitive, while making the institutional layer more valuable, not less. The software that survives will be the software deeply embedded in organisational processes. Rebuilding institutional language games is the work of years and decades, not quarters. That is not a statement about the limits of intelligence. It is a statement about the behaviour of institutions.
But the fact that institutions move slowly does not mean they are safe. There is a more dangerous dynamic at work.
When a consulting firm uses Claude to draft client analyses, it teaches the platform—through aggregate patterns of usage across thousands of firms—what consulting language games look like. Not the firm’s proprietary data, but something more valuable: the shape, the structure, the grammar of the work. The platform learns cross-sectionally and longitudinally. No individual firm’s data is exposed. But the generalised grammar of the industry becomes part of the platform’s capabilities.
Businesses adopt AI tools to remain competitive. In doing so, they feed the very system that is learning to make them unnecessary. Each firm’s decision is rational in isolation. The collective result is catastrophic: a tragedy of the commons in which the commons being destroyed is the economic moat of entire industries. Every customer is simultaneously a revenue source and a training signal. And by the time firms realise this, they have already trained their replacement.
The consequences are systemic because the industries being disintermediated are themselves the infrastructure of other industries. Professional services revenue loss cascades through commercial real estate, business travel, the venture ecosystem, and ultimately the tax bases and social structures of knowledge-economy cities. The USD 2tn destroyed in software market value is not the extent of the damage. It is the down payment.
The optimistic argument, that physical-world automation will absorb what knowledge-work deflation destroys, overlooks a mismatch of speeds: cognitive disruption diffuses at digital speed, whilst physical compensation diffuses at industrial speed. The valley between the two could last a decade, with irreversible losses: institutional knowledge that dissolves does not reconstitute; communities that unravel do not reassemble. The Industrial Revolution took a century. It is hard to believe we will manage something more profound in a fraction of the time.
Europe’s regulatory fragmentation—usually cited as a handicap—may prove to be a brake on this cascade. Twenty-seven regulatory regimes, strict labour protections, and linguistic barriers do not prevent disruption, but they impede the speed at which disruption in one layer propagates to the next. None of this is immunity. It is friction—and friction, in a cascade, is the difference between a managed transition and a structural break.
The market is panicking about the wrong thing. The correct panic is not whether AI can replace individual software tools. It is what happens when the institutions that invite AI into their language games discover they have been teaching it to play without them. That question will be answered over the next decade by the cumulative choices of millions of businesses making the individually rational and collectively catastrophic decision to train their own replacement.
