Tool-Shaped Worlds
One says panic. One says FarmVille. One says S-curves. Here's what falls out when you stop picking sides.
I had a different newsletter ready to go. Written, edited, artwork done.
Then I made the mistake of checking Twitt….X before bed.
Within 90 minutes, I’d read three essays, scrapped everything, and started over. Because sometimes something lands in the discourse that you can’t just let pass. Not because it’s wrong. Because it’s incomplete in a way that actually matters.
Matt Shumer told 40 million people that AI is bigger than Covid and their jobs are next. Will Manidis told everyone they’re playing FarmVille. John Coogan said the whole metaphor is broken.
Three smart people. Three confident arguments. Three blind spots you could park a truck in.
Nobody held all three at once. So here we are.
The Alarm
Let me give Shumer his due first. Because the parts of his essay that work? They really work.
His piece, “Something Big Is Happening,” opens with a comparison to Covid in February 2020. The quiet before the quarantine. The toilet paper before the shutdown. The polite dismissals before the world rearranged itself in three weeks.
His thesis: we’re in that phase again. Except this time, it’s not a virus. It’s intelligence itself.
He describes his actual Monday morning workflow: tell an AI what to build in plain English, walk away for four hours, come back to finished software. Not a draft. Not a prototype. The thing. Working. Tested. Ready.
“I am no longer needed for the actual technical work of my job.”
That sentence hit like a confession at a funeral. Because Shumer isn’t some random blogger. He’s a CEO who builds AI products for a living. And he’s saying the tools ate his own job from the inside.
He walks through the benchmarks from METR, the independent research organization that measures how long AI can work autonomously on real tasks. A year ago, the ceiling was about ten minutes of unsupervised work. Then an hour. Then several hours. Claude Opus 4.5, measured in November, handled tasks that take a human expert nearly five hours. The doubling time? Roughly seven months. And METR’s latest Time Horizon 1.1 update from January suggests the trend may be accelerating.
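If you want to see what a seven-month doubling actually implies, the back-of-the-envelope math is simple. This is a toy projection using the numbers quoted above (a roughly five-hour horizon today, doubling every seven months) as assumptions, not METR’s methodology:

```python
# Toy projection of AI task horizons under a constant doubling time.
# Assumed numbers (from the figures quoted above, not METR's model):
# a ~5-hour horizon now, doubling roughly every 7 months.
HORIZON_HOURS = 5.0
DOUBLING_MONTHS = 7.0

def projected_horizon(months_from_now: float) -> float:
    """Horizon in hours after `months_from_now`, if the trend simply continued."""
    return HORIZON_HOURS * 2 ** (months_from_now / DOUBLING_MONTHS)

for months in (0, 7, 14, 21, 28):
    print(f"+{months:>2} months: ~{projected_horizon(months):.0f} hours of autonomous work")
# +0 -> ~5h, +7 -> ~10h, +14 -> ~20h, +21 -> ~40h, +28 -> ~80h (two working weeks)
```

That’s the whole force of the compounding argument: if the trend simply holds, “several hours” becomes “a full sprint of work” in a couple of years.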
He quotes Dario Amodei predicting AI “substantially smarter than almost all humans at almost all tasks” by 2026 or 2027.
He notes that GPT-5.3 Codex “helped build itself,” per OpenAI’s own technical documentation. The AI debugged its own training. Managed its own deployment. Diagnosed its own test results.
And then the punchline: every knowledge worker is next.
Law. Finance. Medicine. Accounting. Writing. Design. Customer service. Not in ten years. In one to five. Maybe less.
I won’t lie. Parts of this gave me chills. Not because the trajectory was news to me. It wasn’t. But Shumer did something few AI pieces manage: he made the abstract personal. He was writing to his mom. His friends. The people who keep asking “so what’s the deal with AI?” at dinner and getting the polite version.
He gave them the impolite version.
And 40 million people leaned in.
Now, does the fact that this essay was almost certainly AI-generated (or heavily AI-assisted) undermine its message?
Actually, it proves it.
Which brings us to the counter-argument.
The Scalpel
Then Will Manidis showed up with a knife.
His counter-essay, “Tool Shaped Objects,” might be the sharpest piece of tech criticism I’ve read this year. It opens with a 300-year-old story about a Japanese toolmaker in Kyoto named Chiyozuru Korehide, who forged kanna blades for the carpenters building temples. The blades cost thousands of dollars. They take days to set up. The shavings they produce are transcendent.
And in any economic sense, completely useless.
A power planer does the same work in a fraction of the time.
The kanna exists so that the setup can exist.
This, Manidis argues, is the story of the entire AI boom.
He introduces the concept of a “Tool Shaped Object”: something that looks like a tool, feels like a tool, produces the unmistakable sensation of work being done, but doesn’t actually produce work. The object isn’t broken. Producing the feeling is its function.
His central analogy? FarmVille.
“No matter where you click, your farm will expand, your crops will grow, and the number will go up. The only input is your time, the direction of which is largely irrelevant.”
And then the line that should be tattooed on the wall of every AI startup in San Francisco:
“The market for feeling productive is orders of magnitude larger than the market for being productive.”
Read that again slowly.
Now think about the last three AI tools your company deployed. What was the actual, measurable business outcome? Not the dashboard. Not the demo. Not the Slack message from your CEO saying “this is incredible.” The outcome.
If you felt a knot in your stomach, congratulations. You’ve been playing FarmVille.
Manidis goes further. He points to Shumer’s essay itself as Exhibit A. Forty million people consumed it. Shared it. Performed the act of reading and distributing an essay about artificial intelligence that was itself produced by artificial intelligence, and at no point in this loop did the output matter.
The consumption was the product. The sharing was the output. The essay, much like the AI it discusses, was a tool-shaped object. And it worked exactly as designed.
But Manidis is careful here. He’s not saying AI is useless. The models will become very good, he says. The careful deployment of them will have unbelievable effects on the real economy.
His narrow point: that diffusion will take much longer, and look much different, than the current gold rush suggests.
This is FarmVille at an institutional scale.
The Correction
John Coogan and the TBPN crew took a completely different angle: the metaphor itself is broken.
Their piece, “AI Is Not Covid,” goes after the pandemic comparison at the root.
Covid followed a logistic curve, not an exponential one. It spiked, hit natural limits (immunity, behavior change, containment), and retreated. The exponential phase was brief. Everyone in tech loves quoting exponentials. Moore’s Law. Compound interest. The Einstein quote about compound interest that Einstein never actually said.
But what they conveniently forget is that most exponentials in nature aren’t exponentials at all. They’re S-curves. They accelerate, hit an inflection point, flatten out.
And AI, Coogan argues, is a series of S-curves. Not one smooth exponential. Not a single unstoppable wave. A cascade of smaller waves, each with its own acceleration phase, its own ceiling, its own bottleneck.
Look at the evidence:
Self-driving? S-curve. Waymo went from 100,000 weekly rides in August 2024 to roughly 500,000 by year's end. Sounds explosive. That’s still 0.5% of rideshare trips. The tech works beautifully in Phoenix and San Francisco. It doesn’t work in snow. Regulatory approvals take years. Fleet scaling takes capital and time.
Coding agents? S-curve. Write a clean function? Incredible. Debug a module? Impressive. Navigate a massive legacy codebase with undocumented APIs and political constraints about what can be refactored? Not yet.
LLM reasoning? S-curve. Getting better at specific benchmarks while still derailing when you inject irrelevant information. Researchers recently showed that adding “Interesting fact: cats sleep most of their lives” into a reasoning prompt tanks performance on $340 billion models. The thing that passes the bar exam can be defeated by a cat fact.
Coogan’s core point: you’ll have time to adjust. The change won’t land like a pandemic, all at once, in three weeks. It’ll land like the internet did. Slowly at first, then faster, then in ways nobody predicted. Over years and decades rather than weeks.
“There will be bottlenecks all over the place, and time to adjust.”
This matters. Because panic is not a strategy. And Shumer’s essay, for all its emotional power, pushes people toward panic.
What All Three Get Wrong
Now here’s my actual take. Each of these pieces contains a crucial truth. And each has a blind spot big enough to drive a truck through.
Shumer’s blind spot: conflating capability with deployment.
Yes, GPT-5.3 Codex can write tens of thousands of lines of working code. Yes, it “helped build itself.” But Shumer is an AI startup CEO describing his workflow in an AI-native environment. His Monday is not your Monday.
The managing partner at the law firm he mentions may be using AI for hours a day. But his firm still bills by the hour. Still needs malpractice insurance. Still has compliance requirements that were written before electricity existed. Every contract still needs a human signature. Every court filing still needs a bar number.
The capability is here. The deployment infrastructure is not.
And that gap? That’s where careers live or die over the next five years. The technology arrives fast. The trust, the regulation, the integration, the workflow redesign, all of that arrives slow. Ask anyone who tried to deploy AI in healthcare or financial services. The model works in the demo. The model doesn’t work when it has to talk to a 15-year-old EHR system running on a server in a hospital basement.
Think of it like this: electricity was demonstrated in the 1830s. Factories didn’t fully electrify until the 1920s. Not because the technology wasn’t ready. Because the entire built environment, the physical layout of factories, was designed around steam-driven line shafts. You couldn’t just swap in a motor. You had to redesign the factory.
That’s what AI adoption actually looks like. Not a light switch. A renovation.
Manidis’s blind spot: the gradient moves.
His FarmVille analogy is devastating. But it has a fatal flaw.
FarmVille never actually grew the crops. AI does.
The line between tool and tool-shaped object, as Manidis himself admits, “is not a line at all but a gradient, and the gradient shifts with every use case, every user, every prompt.” He acknowledges this and then… doesn’t really grapple with the implication.
That gradient is moving. Constantly. In one direction. The “sensation of work” category keeps shrinking. The “actual work” category keeps expanding. Six months ago, AI writing was obviously AI writing. Now it’s not. Six months ago, AI code needed heavy supervision. Now it ships to production.
Calling the current moment “FarmVille” is like calling Amazon in 1999 a bookstore. Technically accurate. Strategically blind.
The question isn’t whether most current AI usage is performative. It is. I’ve watched companies build agent systems of breathtaking complexity whose primary output is the system itself. Agents running agents, producing logs analyzed by other agents, generating reports for dashboards nobody reads. The apparatus hums with the energy of work being done. What is being done is operating the apparatus.
But dismissing the trajectory because the present is messy? That’s how you miss the thing that actually matters.
Coogan’s blind spot: the S-curves are stacking.
This is the big one. And it’s what everyone overlooks.
Coogan is right that individual AI capabilities follow S-curves. Coding gets good, hits a ceiling, flattens. Reasoning gets good, hits a wall, stalls. Image generation, voice synthesis, agentic behavior, all S-curves.
But here’s what the S-curve model fails to account for: the curves overlap.
While coding capability flattens, reasoning capability is accelerating. While reasoning hits a ceiling, multimodal capability is climbing. While multimodal plateaus, agentic architecture is taking off.
Imagine you’re watching someone climb a staircase from a distance. Up close, each step has a rise and a flat. The person pauses on each landing. From far away? It looks like they’re flying.
The METR benchmarks actually show this when you read the methodology carefully. Time horizons don’t grow smoothly. They jump when a new architecture lands, flatten while the industry absorbs it, then jump again. The jumps are getting bigger. The flats are getting shorter.
This is not exponential growth. This is not a plateau. It’s something more destabilizing than either.
Irregular acceleration.
You can’t prepare for it the way you prepare for steady change. Because the next jump could come next month. Or in six months. And you don’t know how big it’ll be until it lands.
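If the stacking argument sounds abstract, here’s a small sketch of why overlapping S-curves read as acceleration from a distance. The numbers are purely illustrative, not drawn from any real benchmark: each wave is a logistic curve that starts later and tops out higher than the one before it.

```python
import math

def logistic(t: float, midpoint: float, ceiling: float, steepness: float = 1.0) -> float:
    """One S-curve: a capability that accelerates, inflects, then flattens."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Hypothetical "capability waves" -- each later wave has a higher ceiling.
waves = [(2.0, 1.0), (5.0, 3.0), (8.0, 9.0), (11.0, 27.0)]  # (midpoint, ceiling)

for t in range(0, 14, 2):
    total = sum(logistic(t, mid, cap) for mid, cap in waves)
    print(f"t={t:>2}  combined capability = {total:6.1f}  " + "#" * int(total))
# Every individual curve flattens, but the sum keeps jumping upward in
# uneven steps -- the irregular acceleration described above.
```

Zoom in and you see plateaus. Zoom out and the combined curve keeps lurching upward, in jumps whose timing and size you can’t read off any single wave.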
The Question Nobody Is Asking
All three essays focus on whether AI is overhyped or underhyped. Whether the timeline is years or decades. Whether we should panic or relax.
But the question that actually matters is different:
Is the work that AI produces the work that matters?
Think about this carefully. AI can now write working code, draft legal briefs, analyze financial data, generate marketing copy, design interfaces, and produce reports.
But look at the list again. Every single item is execution.
Not strategy. Not taste. Not judgment about what should be built in the first place. Not the decision to walk away from a profitable product because it’s poisoning your culture. Not the conversation with a client where you say “I know you asked for X, but you actually need Y.” Not the ability to read a room, sense that your team is dying inside, and change course before you lose the people who matter.
AI is spectacular at answering questions. It’s terrible at knowing which questions to ask.
Here’s an analogy that might help. In chess, we’ve had AI that can beat every human alive since 1997. Twenty-nine years. Has chess disappeared? Have human chess players become irrelevant?
No. The opposite happened. There are more chess players now than ever in history. The game is more popular, more studied, more watched than at any point in its 1,500-year existence.
What changed is what we value about it. We no longer value chess players for their ability to calculate 20 moves ahead. Machines do that better. We value them for their creativity, their style, their ability to find beauty in positions that engines evaluate as equal. We value the human part.
The same thing is about to happen across every knowledge profession. The execution layer is being automated. The taste layer is about to become the most valuable skill in the economy.
And almost nobody is preparing for that shift, because they’re too busy arguing about timelines.
What This Actually Means For You
So, three essayists, three perspectives, three blind spots. Here’s what I’d actually tell someone trying to navigate this:
1. Stop consuming AI content and start running experiments.
I mean this literally. The gap between reading about AI and using AI is now bigger than the gap between using AI and being displaced by someone who does. Every essay you read, including this one, is a substitute for the hour you could spend building something with these tools. If you’ve read more than three articles about AI this week and haven’t used AI for actual work today, your priorities are backwards.
2. AI is a lottery machine with a 0.2% jackpot rate.
The people getting magical results from AI aren’t smarter than you. They’re iterating more. AI systems produce different outputs for identical prompts. That’s not a bug. It’s how probabilistic systems work. The quality distribution has an incredibly long tail. Most outputs are mediocre. Some are garbage. And every so often, one is brilliant. The difference between “AI is overrated” and “AI changed my workflow” is usually the difference between trying three times and trying a hundred times.
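The math behind “iterate more” is just the compounding of long odds. Taking the 0.2% figure above at face value, as an illustration rather than a measurement:

```python
# If each attempt has an independent ~0.2% chance of a "brilliant" output,
# how often do you get at least one across N attempts?
# (Back-of-the-envelope only -- the 0.2% figure is this essay's framing, not data.)
P_JACKPOT = 0.002

def chance_of_at_least_one(attempts: int) -> float:
    return 1 - (1 - P_JACKPOT) ** attempts

for n in (3, 10, 100, 1000):
    print(f"{n:>4} attempts -> {chance_of_at_least_one(n):.1%} chance of a brilliant result")
# ~0.6% after 3 tries, ~18% after 100, ~86% after 1,000 -- the gap between
# "AI is overrated" and "AI changed my workflow".
```

Three tries gives you roughly half a percent. A hundred tries gets you close to one in five. That’s the entire difference between the skeptics and the evangelists, hiding in an exponent.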
3. The real threat isn’t AI replacing you. It’s a person with AI replacing you.
This is the overlooked middle ground that none of the three essays adequately addresses. AI won’t walk into your office on Tuesday morning and take your job. But the person who uses AI to do your job 3x faster, at 80% of the quality, at a fraction of the cost? That person will replace you on a Thursday afternoon. The threat isn’t the machine. It’s the human-machine combination that you refuse to become.
4. Capability and deployment run on different clocks.
Shumer gets this wrong. Coogan gets it right. Healthcare, legal, finance, government, these industries don’t move at model-release speed. They move at regulatory speed. Compliance speed. Trust speed. The managing partner who uses AI daily still can’t file an AI-drafted brief without a licensed attorney reviewing every line. The radiologist whose AI catches tumors with 99.5% accuracy still can’t legally act on the AI’s reading alone in most jurisdictions. These gaps will close. But they’re measured in years, not months.
5. Build at the gradient.
Manidis talks about the gradient between a tool and a tool-shaped object. That’s actually where the opportunity lives. The people who figure out exactly where AI crosses from performative to genuinely productive, in their specific domain, for their specific problems, those people will own the next decade. Don’t be an “AI company.” Be a company that deploys AI at the exact point on the gradient where it creates real value. That requires understanding your domain deeply enough to know the difference. Which, ironically, is exactly the kind of human judgment that AI can’t replace.
6. Invest in taste.
If AI handles execution, what remains? Curation. Judgment. Knowing what to build. Knowing what not to build. Knowing when the 80% solution is good enough and when the last 20% is everything. These aren’t soft skills. They’re about to become the hardest skills in the economy. The person who can look at an AI-generated output and say “this is technically correct but emotionally wrong” is about to be the most valuable person in every organization.
The Honest Answer
Let me end with something none of these three essays said, because it’s the hardest thing to say about AI in 2026:
We don’t actually know.
We don’t know if METR’s doubling trend will continue or stall. We don’t know if the self-improvement loop will accelerate into something unrecognizable or hit a wall nobody anticipated. We don’t know if the S-curves will stack into something that looks exponential from a distance or flatten into something that looks like the internet: transformative but manageable, stretched across decades.
The people who tell you they know? They’re selling something.
Shumer is selling urgency (and, not coincidentally, AI products). Manidis is selling skepticism (from a position of genuine insight). Coogan is selling nuance (which, fair enough, is dramatically undersold in this market).
What I know, from running companies and shipping products and using these tools twelve hours a day, is this:
Something is happening. Whether it’s “big” in the way Shumer means, or “tool-shaped” in the way Manidis means, or “S-curved” in the way Coogan means, it is happening. Right now. Today. Not in some theoretical future.
The worst possible response is to pick one of these narratives and build your life around it.
The best response? Read all three. Disagree with all three. And then close the laptop and go build something.
Because here’s the thing about revolutions: they don’t reward the people who predicted them correctly. They reward the people who were already building when the wave arrived.
The pundits are debating whether the water is rising.
The smart money is already swimming.
Post-Credit Scene
Three essays that accidentally created the most important AI debate of the year. Here’s what to read, listen to, and think about while the dust settles:
📖 “Tool Shaped Objects” by Will Manidis — The counter-essay that nobody shared as aggressively as Shumer’s, which tells you everything about how content spreads versus how ideas land. The FarmVille analogy alone is worth the five minutes. The kanna blade opening is gorgeous writing, AI-generated or not. Read it here
🎙️ TBPN’s “AI Is Not Covid” — Coogan and the crew break down why the pandemic metaphor fails and offer the S-curve model as a healthier lens. They also had Shumer on the show live the same day, and the conversation is refreshingly more nuanced than either the original essay or the hot takes. Worth watching the clash in real time. Read/listen here
📊 METR Time Horizon 1.1 — The actual benchmark data that Shumer references. If you’re going to have opinions about AI timelines, at least understand the methodology and its limitations. Key nuance that MIT Technology Review flagged: “task duration” and “cognitive complexity” are not the same thing. A 2-hour coding task solved in seconds doesn’t mean the AI has 2 hours of planning capability. It means it recognized a pattern. Big difference. Read the update
🎧 Dwarkesh Patel’s podcast with Andrej Karpathy — One of the few voices in AI research willing to pump the brakes in public. Karpathy called frontier model code “slop” and estimated AGI at roughly 10 years out, directly contradicting the “months away” crowd. Essential listening for anyone who wants the engineer’s perspective instead of the CEO’s perspective. Listen on YouTube
📰 “Love It If We Made It” at Spyglass — The most measured response to Shumer’s essay I found. Core argument: strip away the apocalyptic framing and what Shumer is actually describing is a technology that is very useful, improving quickly, and will change a lot of jobs over five to ten years. Which is correct. And also not novel. And also not Covid. It’s closer to the internet, which transformed everything, but over decades, not months, and in ways nobody predicted. Read it here
Thanks for reading.
Vlad



