Butlers or Architects?
In a recent viral post, Matt Shumer declares dramatically that we've crossed an irreversible threshold. He asserts that the latest AI models now exercise independent judgment: he gives an AI plain-English instructions, steps away for a few hours, and returns to a flawlessly finished product that surpasses anything he could build himself. In the near future, he claims, AI will autonomously handle all knowledge work and even build the next generation of AI itself, leaving human creators completely blindsided by the exponential curve.
This was a depressing read. The dramatic tone lands well. And extrapolating from the progress of the last six years, it's hard to argue against what AI might achieve in the next six.
I forwarded this to a friend, who had the misfortune of reading it before bed. He told me it gave him a nightmare: he dreamed he was an Uber driver, completely displaced from his high-tech career.
Someone on Twitter had a comeback: "The thing I don't get is: Claude Code is writing 100% of Claude's code now. But Anthropic has 100+ open dev positions on their jobs page?" Boris Cherny of Anthropic replied: "The reality is that someone has to prompt the Claudes, talk to customers, coordinate with other teams, and decide what to build next. Engineering is changing, and great engineers are more important than ever."
This is strongly reminiscent of the Shell Game podcast I wrote about recently. And it connects to my arguments in "Agentic AI and The Mythical Agent-Month" about the mathematical laws of scaling coordination. Throwing thousands of AI agents at a project does not magically bypass Brooks' Law. Agents can dramatically scale the volume of code generated, but they do not scale insight. Coordination complexity and verification bottlenecks remain firmly in place. Until you solve the epistemic gap of distributed knowledge, adding more agents simply produces a faster, more expensive way to generate merge conflicts. Design, at its core, is still very human.
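To make the coordination arithmetic concrete, here is a minimal sketch of the quadratic growth behind Brooks' Law: among n workers (human or agent), the number of pairwise communication channels is n(n-1)/2. The numbers below are illustrative, not a claim about any particular team of agents.

```python
def communication_channels(n: int) -> int:
    """Pairwise communication channels among n workers: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for agents in (3, 10, 100, 1000):
    print(f"{agents:>5} agents -> {communication_channels(agents):>7,} channels")
```

A tenfold increase in agents brings roughly a hundredfold increase in coordination overhead: 10 agents share 45 channels, 1,000 agents share 499,500. That is why the volume of code scales while insight does not.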
Trung Phan's recent piece on how Docusign still employs 7,000 people in the age of AI provides useful context as well. Complex organizations don't dissolve overnight. Societal constructs, institutional inertia, regulatory frameworks, and the deeply human texture of business relationships all act as buffers. The world changes more slowly than the benchmarks suggest.
So we are nowhere near a fully autonomous AI that sweeps up all knowledge work and solves everything. When we step back, two ways of reading the situation come into view.
The first is that we are all becoming butlers for LLMs: priming the model, feeding it context in careful portions, adding constraints, nudging tone, coaxing the trajectory. Then stepping back to watch it cook. We do the setup and it does the real thing.
But as a perennial optimist, I think we are becoming architects. Deep work will not disappear; rather, it will become the only work that matters. We get to design the blueprint, break the logic into high-level parts, set the vision, dictate strategy, and chart the trajectory. We do the real thinking, and then we make the model grind.
In any case, this shift carries real danger. When we delegate execution, it becomes tempting to gradually delegate thought as well. LLMs make thinking feel optional. People were already reluctant to think; now they can bypass it entirely. It is unsettling to watch a statistical prediction machine stand in for reasoning. Humbling, too. Maybe we're not as special as we assumed.
This reminds me of Ted Chiang's story "Catching Crumbs from the Table", where humanity is reduced to interpreting the outputs of a vastly superior intellect. Human scientists no longer produce breakthroughs themselves; they spend their careers reverse-engineering discoveries made by "metahumans". The tragedy is that humans are no longer the source of the insight; they are merely trying to explain the metahumans' genius. The title captures the feeling really well. We're not at the table anymore. We're just gathering what falls from it.
Even if things come to that, I know I'll keep thinking, keep learning, keep striving to build things. As I reflected in an earlier post on finding one's true calling, this pursuit of knowledge and creation is my dharma. That basic human drive to understand things and build things is not something an LLM can automate away. This I believe.
I recently launched a free email newsletter for the blog. Subscribe here to get these essays delivered to your inbox, along with behind-the-scenes commentary and curated links on distributed systems, technology, and other curiosities.