The End of Productivity Theater

I remember the early 2010s as the golden age of productivity hacking. Lifehacker, 37signals, and their ilk were everywhere, and it felt like everyone was working on jury-rigging color-coded Moleskine task-trackers and web apps into the perfect Getting Things Done system.

So recently I found myself wondering: what happened to all that excitement? Did I just outgrow the productivity movement, or did the movement itself lose steam?

After poking around a bit, I think it's both. We collectively grew out of that phase, and productivity itself fundamentally changed.


The Trap of Micro-Optimizations

Back then, the underlying promise of productivity culture was about outputmaxxing (as we would now call it). We obsessed over efficiency at the margins: how to auto-sync this app with that one, or how to shave 5 seconds off an email reply. We accumulated systems, hacks, and integrations like collectors.

Eventually, the whole thing got exhausting. I think we all realized that tweaking task managers wasn't helping the bottom line. We were doing a lot of organizing, but that organizing wasn't translating into actually getting the work done.

The reason is simple: not all tasks matter equally. Making some tasks faster does not move the bottom line if the core task remains the serial bottleneck. Amdahl's Law says that speeding up one part of a system improves overall performance only in proportion to the time that part consumes. If the hard, irreducible core is untouched, optimizations elsewhere are just noise.
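To make the arithmetic concrete, here is a minimal sketch of Amdahl's Law in Python; the 5% figure for time spent on email is a hypothetical number chosen for illustration:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of total time is sped up by factor s.

    Amdahl's Law: speedup = 1 / ((1 - p) + p / s).
    """
    return 1.0 / ((1.0 - p) + p / s)

# Suppose email handling is 5% of your day and a clever hack makes it 10x faster:
print(amdahl_speedup(0.05, 10))    # ~1.047 — under a 5% overall improvement

# Even making that 5% infinitely fast caps the overall gain at ~1.053x:
print(amdahl_speedup(0.05, 1e12))
```

However large s gets, the overall speedup is bounded by 1 / (1 - p): optimizing a sliver of your time can never buy back more than that sliver.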

Painting the deck of a sinking ship faster doesn't help anyone. Productivity should be about making sure we are working on the right things in the first place. The main thing is to keep the main thing the main thing. 


Away From the Glowing Rectangle

For more than 15 years, I've relied on Emacs org-mode to run my life. It's the ultimate organization system, one that has survived every software trend of the past decade and a half. But despite having this powerful writing system at my fingertips, my best ideas never arrive while I'm staring at a screen. Almost without exception, my hard thinking happens away from the screen. That's where the ideas come from.

If I'm being rational about it: I should be paid for the time I spend away from the screen thinking hard, not for the time I spend managing my inbox, or doing trivial office work, or wrangling text on a screen.

So that's how I try to work now. I do my deep thinking, messy brainstorming, and wrestling-with-ideas completely away from the screen. Then I plan my next 45 minutes or so (what I'm going to do, in what order, and why) and only then do I go to my laptop to execute it. In other words, I arrive at the screen with a plan.

(OK, let's first take a moment to appreciate my self-restraint for not mentioning AI until this late into the post. But here it comes.)

What does productivity even mean in the age of AI? What are we actually here to contribute? Are we supposed to be architects or butlers to LLMs?

If AI absorbs all the shallow work, the only things left that genuinely require a human are the core parts that demand creativity, judgment, taste, and the type of thinking that can't be prompted away. This raises the stakes considerably, and changes what "a productive day" even means.

That kind of deep creative work is best done away from the glowing rectangle.


I recently launched a free email newsletter for the blog. Subscribe here to get these essays delivered to your inbox, along with behind-the-scenes commentary. 
