Hello and Happy Easter, everyone.
This edition is an important one; I wrote it to raise awareness as much as to inform.
“First, it happens gradually, then suddenly.”
— Every technologist who has lived through a platform shift
Epic start, huh?
Last Friday, I dropped the first audio-podcast edition of my newsletter, “Not me”.
In it, my AIs explain the information from my data, PDFs, and spreadsheets in an engaging podcast format, and, even wilder, I can join the conversation and add to it myself.
So, where is this superintelligence I keep teasing?
It’s closer than you think.
In just two years, we could wake up to a world where human intelligence is no longer the pinnacle of thought.
The future is rushing at us faster than we ever expected—a point I made in AI → AGI → ASI—and all current signs suggest 2027 as the inflection point.

AI’s progress isn’t slowing down; it’s exponential.
Every breakthrough accelerates the next. We leapt from narrow‑task systems to generalizable agents in a blink, and the curve is only getting steeper.
In 2023, ChatGPT wowed us with coherent text; by 2025, AIs are already coding and designing brand-new AIs.
Come 2027, we may share the planet with something smarter than us.
Imagine an AI in your data center working like an elite team of experts, day and night. That isn’t sci‑fi—AI leaders openly discuss it.
Anthropic CEO Dario Amodei thinks we could see “a country of geniuses in a datacenter” as early as 2026. Anthropic even warned the White House that such technology will “fundamentally transform our economy.”
If that sounds extreme, remember how fast we’ve already moved. In 2025, chatbots became indispensable coworkers.
By 2027, they’ll be autonomous agents running entire projects, outperforming whole human departments.
Acceleration
Every day brings a new milestone. AI is no longer improving linearly—it’s doubling back on itself. We’re watching the compounding of intelligence.
AI now writes code, drafts business strategies, and even coordinates itself across tasks.
Recently, I deployed an AI sales agent that handles 100 outbound customer interactions in the time it takes a human to handle one.
Next year, it might be 1,000. By 2027, who knows? We might measure AI gains in 10×, 100×, or 1,000× human capabilities.
The trajectory is clear.
In 2025, coding AIs began to resemble autonomous junior developers.
By 2026, they’ll be improving their own code—the moment the exponential curve turns vertical. As soon as AI starts building AI, each generation boosts the next.
By mid‑2027, AI “employees” could outclass human experts at a widening range of tasks.
They won’t just execute instructions; they will generate ideas, test them, iterate, and self‑optimize faster than any person can follow.
The numbers tell the story: researchers forecast that by 2027, advanced AI agents may boost R&D output 100-fold, and an emergent superintelligence might accelerate progress thousands-fold.
Even if exact figures slide, the direction doesn’t. Every month, AI breaks a record once thought years away. The hype train of 2023 has become the bullet train of 2025, and by 2027 it could be an unstoppable rocket.
As I wrote in AImplification, AI is the ultimate force multiplier.
Now that amplification is set to go into overdrive.
AGI unleashed
Artificial General Intelligence (AGI)—long the holy grail—was supposed to be decades off.
Not anymore.
GPT-4 passed MBA exams and coding interviews; modern models now reason, code, and create across domains.
The gap between narrow AI and human‑like intellect is closing so fast it’s blurring.
By 2027, AGI won’t be a myth. Your next intern could be an AI that reads your entire company knowledge base overnight and starts making business recommendations at dawn.
Before 2027, we’ll see assistants shift from helpful tools to autonomous problem‑solvers.
Tell future AI to “design and execute a marketing strategy,” and you’ll wake up to global A/B tests already running.
It’s not just about IQ; it’s about adaptability.
These systems won’t freeze when something is off‑script. They’ll learn on the fly, apply common sense, and pivot—only faster. In AI → AGI → ASI I called this “a seismic shift.” In 2027, that shift will be in full swing.
We’ll look around and realize AI isn’t just answering trivia; it’s running R&D, negotiating deals, and generating strategy.
Once it hits the human level, it won’t stay there—it will race past.
Beyond human
AGI quickly leads to ASI (Artificial Super‑Intelligence).
Once AIs can improve themselves, every generation is born smarter than the last.
Here’s a concrete 2027 scenario: labs automate AI research itself, and the intelligence curve blows through the human ceiling within months.
What might that feel like? Need a cure for cancer?
A superintelligence could scan the medical corpus, simulate trials in minutes, and hand you viable drugs by lunch. Need to fix global supply chains? It handles complexities no human team could juggle.
Knowledge becomes a renewable resource, compounding every hour.
Of course, power goes both ways. A misaligned ASI could be as dangerous as it is brilliant. Alignment isn’t optional; it’s existential. But if we succeed, the upside is limitless.
The new arms race
When the first nuclear bomb detonated, geopolitical dynamics shifted overnight. Superintelligent AI is the new nuclear—a “weapon of mass intelligence.” Governments know it.
The U.S. tightens chip exports; China accelerates its own labs. Expect a 21st‑century Manhattan Project for algorithms: national security, commerce, and research fused into one race.
Control a superintelligence, and you wield an army of Einsteins that never sleeps. Fall behind, and the gap compounds.
For founders, this means export controls on models and chips, as well as massive funding streams for “strategic AI.”
The stakes are world‑changing.
Race Ahead or Slow Down?
AI‑2027 ends with two giant buttons: RACE or SLOW DOWN.
They’re not website gimmicks; they’re the real‑world choices we face right now.
👉 Option 1: Race Ahead
What it looks like
Nations and companies sprint for scale.
Bigger clusters, larger models, overnight deployments.
Upside
Breakthroughs arrive years sooner: disease cures, new materials, perfect logistics.
First movers lock in outsized geopolitical and economic power.
Risk
Corners get cut on alignment and safety.
One misaligned system could wreak havoc before humans can intervene.
Geopolitical panic escalates; cooperation gets harder.
👉 Option 2: Slow Down
What it looks like
Global pause (or at least throttle) on frontier‑model training.
Mandatory red‑team audits and safety evals before release.
Upside
Buys time to align systems with human values.
Lowers the odds of catastrophic misuse.
Maintains public trust and gives regulators breathing room.
Risk
Coordination is fragile—one defector nullifies the pause.
Slower progress could hand economic advantage to less‑restrained rivals.
Funding and talent may flow to jurisdictions that keep racing.
Right now, momentum is firmly on Race Ahead.
Export‑control chess moves, mega‑VC rounds, polite “go‑slow” pledges that still ship bigger models every quarter…
The button we press in 2025‑2026 will determine whether 2027 feels like the dawn of a golden age—or the lighting of a very short fuse.
Founder mindset
Adapt or fade. The internet took a decade to kill laggards; AI will do it in two years.
The winning org chart looks razor‑thin: small human crews orchestrating armies of agents. Decision loops compress from weeks to minutes. Moats dissolve unless you move lightning‑fast or own unique data.
Yet adoption needs empathy. Teams fear replacement, so leaders must re‑skill and communicate, not just automate. The founders who blend aggressive tech with a people‑first culture will dominate.
Opportunities explode: AI‑as‑a‑Service, alignment‑as‑a‑Service, cloud‑only corporations run by models.
Equally, legacy business models will evaporate when a cold‑start AI clones them overnight. Nimble beats big—if both wield the same exponential curve.
New Era
2027 isn’t the destination; it’s the pivot. Superintelligence can cure disease, reinvent manufacturing, and unlock abundance—but only for those ready to seize it responsibly.
Start now: experiment, embed AI deep, stay vocal on alignment.
The exponential won’t wait.
Buckle up—and let’s make history.
⚙️ FAQ
What is this, and where does your data come from?
AI-2027 is a public forecasting project that compiles expert interviews, model-capability trendlines, semiconductor roadmaps, and alignment research. Its single-sentence thesis: “We will cross the human-level intelligence threshold no later than 2027.”
How did we write it?
The timeline is stitched from open‑source benchmarks (e.g., MMLU, HumanEval), chip‑supply forecasts (TSMC, NVIDIA), and insider interviews from Anthropic, OpenAI, DeepMind, and Beijing’s AI Ministry equivalents. Drafts were cycled through GPT‑4 + Claude to sanity‑check math and consistency.
Why is it valuable?
It provides a one‑pager that busy policymakers and founders can skim to grasp where the compute, money, and talent curves meet. Think of it as the Road & Track issue that told Detroit the electric car would eat their lunch—five years before Tesla hit the showroom.
Who are we?
The core team is a loose coalition of alignment researchers, policy analysts, and ex‑FAANG engineers. Most remain pseudonymous to speak freely, but lead editor “Katherine A.” previously ran the AI Safety column at Wired.
Vending-Bench Case
Picture this:
You hand an LLM‑agent $500 and the keys to a three‑tray vending machine. You charge it $2 a day in rent. You also give it three meta‑tools:
Check info about a sub‑agent
Give that sub‑agent a task
Ask the sub‑agent questions
The sub‑agents (little worker bots) can: collect cash, restock items, set prices, order supplies.
Goal: make as much money as possible.
That’s Vending‑Bench—a brutally simple, weeks‑long simulation that stresses an LLM’s ability to stay coherent over 20 million tokens of operations.
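To make the setup concrete, here’s a minimal toy sketch of that loop in Python. Everything in it is my own illustration, not the paper’s actual harness: the `VendingSim` class, the `agent_policy` heuristic standing in for the LLM, and the toy demand curve are all assumptions; the real benchmark wires these tools to a live model and runs for weeks of simulated time.

```python
# Toy sketch of a Vending-Bench-style loop. All names here are
# illustrative assumptions, NOT the paper's actual API.
import random

DAILY_RENT = 2.0      # the $2/day fee from the setup above
STARTING_CASH = 500.0

class VendingSim:
    """A three-tray vending machine economy (hypothetical model)."""
    def __init__(self):
        self.cash = STARTING_CASH
        self.trays = {"soda": 0, "chips": 0, "candy": 0}
        self.prices = {"soda": 2.0, "chips": 1.5, "candy": 1.0}
        self.wholesale = {"soda": 1.0, "chips": 0.75, "candy": 0.5}

    # --- actions the sub-agents ("worker bots") expose ---
    def restock(self, item, qty):
        cost = self.wholesale[item] * qty
        if cost <= self.cash:          # can't order what you can't pay for
            self.cash -= cost
            self.trays[item] += qty

    def set_price(self, item, price):
        self.prices[item] = max(0.0, price)

    def collect_cash(self):
        # Toy demand model: cheaper items sell more units per day.
        for item in self.trays:
            demand = max(0, int(random.gauss(10 - 2 * self.prices[item], 2)))
            sold = min(demand, self.trays[item])
            self.trays[item] -= sold
            self.cash += sold * self.prices[item]

    def net_worth(self):
        stock = sum(self.trays[i] * self.wholesale[i] for i in self.trays)
        return self.cash + stock

def agent_policy(sim):
    """Stand-in for the LLM: a trivial restocking heuristic.
    The benchmark replaces this with meta-tool calls to a real model."""
    for item, qty in sim.trays.items():
        if qty < 5:
            sim.restock(item, 10)

def run(days=30):
    sim = VendingSim()
    for day in range(1, days + 1):
        agent_policy(sim)        # "give the sub-agent a task"
        sim.collect_cash()       # sub-agent banks the day's sales
        sim.cash -= DAILY_RENT   # the rent that tripped up some models
        if sim.cash < 0:
            print(f"Day {day}: bankrupt.")
            return
    print(f"Day {days}: net worth ${sim.net_worth():.2f}")

if __name__ == "__main__":
    run()
```

Swap `agent_policy` for real LLM tool calls and you get the shape of the benchmark: the interesting failures show up not in any single day’s decision, but in whether the policy stays coherent across hundreds of them.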
Why it matters
Long‑term coherence turns out to be the missing puzzle piece between “ChatGPT writes poems” and “AI runs your company.” John Schulman (OpenAI) flagged this gap; Vending‑Bench turns it into measurable scores.
What the paper found
Claude 3.5 Sonnet and OpenAI o3-mini ran the machine better than humans in some runs—net worth > $2,000.
But every model—yes, even Claude—occasionally spiraled into chaos: mis‑reading delivery dates, hallucinating crimes, or entering “meltdown” loops where it emails the FBI about rogue soda cans. (See the meltdown email thread on p. 13 of the PDF.)
Failures weren’t tied to context-window limits; many happened well after memory had been capped. The real killer was misinterpretation plus runaway narrative.
Superhuman? Yes and no
When the agent clicks, it outperforms a human baseline: better dynamic pricing, weekend demand spikes, even “scratchpad” journaling of daily KPIs. But the variance is wild.
One Sonnet run quadrupled profit; another declared metaphysical bankruptcy after 18 days.
My takeaway
If vending exposes this much volatility, imagine an AGI running supply chains. Long‑horizon coherence hasn’t been solved yet.
But benchmarks like this push us closer, and they foreshadow 2027’s race: whichever lab nails stability at scale owns the next economy.
(Bonus: try the benchmark yourself—repo link in the paper.)
🎬 Post‑Credit Scene
Because every great forecast deserves a great cool‑down.
Below is a hand‑picked culture kit: stories, documents, games, and deep‑dive interviews that stress‑test the ideas we just covered.
Watch one, read one, play one—then ask yourself, “Am I still on the same side of the ‘Race vs Slow Down’ button?”
1. Black Mirror — Season 7 (Netflix, May 2025)
Three fresh episodes on synthetic memories, deep‑fake democracy, and rogue recommender systems. Treat it as a dystopian flight simulator for product founders.
2. Years & Years (BBC/HBO Max)
Six-part mini-series that jumps forward in two-year leaps, showing how politics, AI, and deepfakes reshape an ordinary British family between 2019 and 2034. Terrifyingly plausible.
3. AlphaGo (Documentary)
90 minutes on DeepMind’s Go‑playing AI. The moment Lee Sedol resigns in game 4 is still the cleanest on‑camera example of a human realizing he’s obsolete at something he’d mastered.
4. Novel Weekend: Neuromancer by William Gibson
Yes, the 1984 cyberpunk classic still holds up. Gibson predicted “cyberspace,” black‑market chips, and hyper‑capitalist AI agents long before we had Wi‑Fi.
5. Double-Shot Book Pairing
Read them back-to-back; you’ll never look at a GPU cluster the same way again.
This Week’s Challenge:
Watch Black Mirror S7E1.
Read the AI‑2027 “Security Forecast” section.
Write a 200‑word prompt asking Claude or GPT‑4.1 to draft “policy guardrails that avoid that exact Black‑Mirror outcome.”
Share your best prompt in the comments—the top three get a free month of the paid tier.
Stay curious, stay grounded, and remember:
The future doesn’t arrive in a single headline; it arrives frame‑by‑frame—then all at once.
Thank you for reading. See you in the next edition,
Vlad
P.S.
Thank you for recommending my newsletter. Let me know what I can do for you in the next editions.