The Consensus Bottleneck: Why AI Won't Automate Organizations as Fast as It Automates Code
A common theme in discussions about AI and productivity is what happens after we’ve automated coding. One version of the story goes: once coding is automated, everything else follows, and productivity explodes—or, alternatively, labor’s share at technology companies collapses and we face an economic apocalypse.
This is a plausible story. But I think there will be substantial barriers to transforming firms into little more than automated coding systems plus a CEO. The reason is that much of the work done in large organizations isn’t actually producing code, manufacturing products, or whatever else we think of as “true work.” A lot of the work is people meeting with each other.
What Meetings Are Actually For
Why do employees of organizations spend so much time in meetings? On reflection, the answer is clear: meetings exist to produce decisions. Multiple actors with stakes in a situation gather context, exercise judgment about that context, and come to agreement about what to do next. That process, fortunately or unfortunately for human labor, currently happens in human brains, not in AI systems.
Even if AI wrote all the software at a large company, humans would still meet to decide what that software should do. They’d still make decisions about marketing budgets, compensation, strategic direction, partnerships, and countless other matters beyond product development.
The Limits of AI Judgment
Technologists will naturally respond that there’s nothing special about human judgment. AI can make these judgments, or multiple AI agents can converse to reach decisions—perhaps better ones—enabling fully autonomous firms. I don’t think anything deep prevents this in principle.
But there’s a crucial constraint: existing organizations are meant to serve human preferences. When firms decide how to produce something, they’re ultimately serving the owners, who are human. More indirectly, they also serve the preferences of customers, who are likewise human somewhere down the supply chain. Until AI can literally read minds or predict human wants with very high accuracy, humans will remain essential to decision-making at some point in the process.
Firms as Political Structures
Even setting aside the question of whether AI could replace human judgment, there’s a separate question of whether existing firms will allow it. Firms are political structures with power centers and veto players. Decisions can’t be made unilaterally. To launch a new product, change an existing one, or even swap out a model powering a feature, many people must be involved. As long as those people remain employed in those positions, they must participate in meetings, read the documents, and establish common knowledge that everyone is aligned.
This consensus culture may produce better decisions—more minds, more constituencies, more concerns addressed. But it dramatically slows everything down. Code that could be written and shipped in a day might still take months to actually deploy.
The Path Forward: New Firms
It’s hard to be optimistic that existing large firms will successfully shed this consensus culture. Instead, I expect many economic functions will be taken over by new firms—firms organized from the start to minimize human consensus as a bottleneck. These firms will use speed to outmaneuver larger incumbents in many markets, for the reasons John Boyd captured with his OODA loop: the actor that can observe, decide, and act through faster cycles outmaneuvers the slower one.
These new firms may be structured in a variety of ways. For example, managers might be represented by AI agents in meetings, employees might be replaced by agents altogether, or individual managers might hold broader unilateral decision rights rather than requiring organization-wide alignment. We’ll see many such firms emerge, and as with any process of creative destruction, equilibrium will reveal which organizational forms survive.
But it’s worth keeping these basic forces in mind: the bottleneck to AI-driven productivity at the moment isn’t writing the code. It’s getting humans to agree on what to do with it.
I’d say that some meetings are about decisions, but more frequently they’re about sharing context. AI should be able to help a lot with these: if everyone has an AI assistant and the assistants all communicate with each other, suddenly each person has access to the context of everyone at the company without sitting through the weekly review meeting.
But otherwise I agree with all of this. There will always be some decisions that are subjective, and humans will make those. There are also decisions that would be better left to AI, but that humans will insist on making for reasons of politics and power and the like.