The Peril of Laziness Lost: Why Your AI Writes Too Much Code
Bryan Cantrill argues that LLMs lack the programmer's greatest virtue — laziness. When writing code costs nothing, everything gets bigger. But does it get better?
You know that friend who, when asked to help you move, shows up with seventeen boxes, a label maker, color-coded tape, and a spreadsheet for tracking furniture placement — when all you needed was a truck and two hands?
That's what LLMs do to your codebase. And Bryan Cantrill just wrote the best essay I've read about why that's a problem.
The Virtue of Being Lazy
Larry Wall — the creator of Perl — famously defined three virtues of a great programmer: laziness, impatience, and hubris. Sounds like a joke. It's not.
Laziness, in Wall's definition, isn't about doing nothing. It's about refusing to do unnecessary work. It's the instinct that says "I'd rather spend four hours building an elegant abstraction than copy-paste twenty lines ten times." It's the friction that forces you to think before you write.
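That trade is easy to see in miniature. Here's a hypothetical sketch (the field names and checks are invented for illustration): the copy-paste version repeats the same validation for every field, while the "lazy" version does the small up-front work of one reusable function.

```python
# Copy-paste version: the same three checks, duplicated per field.
def validate_name(value):
    if value is None:
        raise ValueError("name is required")
    if not isinstance(value, str):
        raise ValueError("name must be a string")
    return value.strip()

def validate_email(value):
    if value is None:
        raise ValueError("email is required")
    if not isinstance(value, str):
        raise ValueError("email must be a string")
    return value.strip()

# Lazy version: one abstraction, parameterized by field name.
# Ten fields later, this is still one function instead of ten.
def validate_field(field, value):
    if value is None:
        raise ValueError(f"{field} is required")
    if not isinstance(value, str):
        raise ValueError(f"{field} must be a string")
    return value.strip()
```

The lazy version isn't just shorter; when a fourth check gets added later, it gets added once.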
Cantrill, the CTO of Oxide Computer Company and one of the most respected systems engineers alive, argues that this friction is a feature, not a bug. Writing code is hard. That hardness is what separates a clean, maintainable system from a pile of technically functional spaghetti.
The Problem: LLMs Don't Get Tired
Here's the core insight. When you sit down to write code, you have a limited budget — time, energy, attention. That budget forces you to make choices. Do I really need this function? Can I simplify this interface? Is there a way to do this in five lines instead of fifty?
LLMs have no budget. Work costs them nothing. And so they do all of it.
As Cantrill puts it:
"LLMs do not feel a need to optimize for their own (or anyone's) future time, and will happily dump more and more onto a layercake of garbage."
That's not a failure of the technology. It's a fundamental characteristic. The thing that makes LLMs useful — they can generate code tirelessly — is the same thing that makes them dangerous if unchecked.
The 37,000-Line Flex
Cantrill's sharpest example is entrepreneur Garry Tan, who publicly bragged about generating 37,000 lines of code in a single day using AI tools. Proudly shared screenshots. Said he was "still speeding up."
For context: DTrace — one of the most important debugging tools in systems programming, used across Solaris, macOS, FreeBSD, and Linux — contains roughly 60,000 lines of code. That's the total. Built over years by a team of expert engineers.
| | Lines of Code | Time | Result |
|---|---|---|---|
| Garry Tan's AI session | 37,000 | 1 day | Multiple test harnesses, a Hello World Rails app, a text editor, redundant logo files |
| DTrace (entire codebase) | ~60,000 | Years | Industry-standard debugging tool used across major operating systems |
When Cantrill examined what those 37,000 lines actually contained, he found: multiple test harnesses, a Hello World Rails app, a stowaway text editor, and redundant logo variants. Not 37,000 lines of value — 37,000 lines of stuff.
The problem isn't that AI can generate a lot of code. It's that generating a lot of code feels productive. Vanity metrics become invisible traps.
The Chef and the Grocery Store
Here's how I think about it. A great chef doesn't buy every ingredient in the store. They buy exactly what the dish needs, and they know what to leave on the shelf. That restraint — that laziness about unnecessary work — is what makes the dish great.
Now imagine a chef with an unlimited budget, no sense of taste, and infinite energy. They'd fill the kitchen floor-to-ceiling. Every spice, every cut of meat, every imported truffle. The kitchen looks impressive. The meal is incoherent.
LLMs are the unlimited-budget chef. They'll give you everything. Your job is to be the lazy one — to know what the dish actually needs, and send the rest back.
The Honest Counterargument
The Hacker News discussion surfaced a fair pushback: this might be a tooling problem, not a fundamental one.
Better prompting, stricter linting, smarter CI/CD pipelines, and agent frameworks that enforce constraints could — in theory — teach LLMs to be lazier. Some commenters argue that efficiency pressures will naturally drive better abstractions over time, a kind of "token-based natural selection."
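One concrete shape such a constraint could take: a pre-merge check that fails when a change adds more lines than some budget, forcing a human to justify a large AI-generated diff. This is a minimal sketch of the idea, not any real tool — the budget number and the helper names are invented here; only the `git diff --numstat` output format is real.

```python
import subprocess

LINE_BUDGET = 500  # hypothetical cap on lines added per change

def count_added(numstat: str) -> int:
    """Sum the 'added' column of `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        added = line.split("\t", 2)[0]
        if added != "-":  # binary files report "-" instead of a count
            total += int(added)
    return total

def within_budget(base: str = "main", budget: int = LINE_BUDGET) -> bool:
    """True if the current branch adds no more lines than `budget`."""
    numstat = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_added(numstat) <= budget
```

Wired into CI, a check like this doesn't make the model lazier — it makes the humans notice when it wasn't.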
Others draw a useful distinction between vibe coding (generating without understanding) and intentional AI-assisted development (using AI as a contributor, reviewing its output with the same rigor you'd apply to a junior developer's pull request). The problem isn't the tool — it's treating the tool's output as done instead of draft.
Fair points. But Cantrill's counter is compelling: the defaults matter. If the tool's natural tendency is to produce more rather than better, you need constant vigilance to fight that. And vigilance is expensive — which is ironic when the whole point was saving effort.
What This Means for You
If you write code: read the essay. Seriously. It's one of the best articulations I've seen of something many developers feel but can't quite name — the unease when AI-generated code works but doesn't feel right.
If you don't write code: the lesson still applies. AI tools in every domain will default to more. More text. More slides. More options. More features. The people who thrive won't be the ones who generate the most — they'll be the ones who know what to cut.
Laziness, it turns out, is a skill. And it might be the one skill AI can't learn.
Sources
- The Peril of Laziness Lost — Bryan Cantrill — the original essay that sparked this post
- Hacker News discussion — community reaction with strong counterarguments and nuance
- Simon Willison's highlight — key quote and signal boost from Willison's blog