Vlad's Newsletter


I read 512,000 lines of Anthropic's leaked source code. Half the "bombshell features" don't actually work.

Vladyslav Podoliako
Apr 02, 2026

Hey,

Yesterday, Anthropic accidentally leaked the entire source code of Claude Code. 512,000 lines. 1,900 files. 44 hidden feature flags. The internet immediately exploded.

“Claude Code has a dreaming AI!” “There’s a secret Tamagotchi pet with RPG stats!” “Multi-agent swarms are already built!” “Sessions talk to each other over Unix sockets!”

I use Claude Code every single day. I built LinguaLive with it. I run multiple simultaneous instances for parallel development. So I did what apparently nobody else bothered to do.

I read the actual code. Line by line. Traced every claim to a source file.

The dreaming AI? Two boolean flags that do nothing. The Tamagotchi? Six React files and an April 1 drop date. The multi-agent swarm? One archived TypeScript file, never ported. The Unix socket inbox? Doesn’t exist. Zero references anywhere.

Half the internet is panicking about features that are either stubs, archived, or completely fabricated.

I published the full code-backed analysis on GitHub: Belkins/claude-code-analysis. Every claim is traced to source lines. Every feature verified or debunked. If you find it useful, drop a ⭐.

But here’s the thing. What IS real inside that codebase? That’s the part worth your attention.

Because the stuff that actually works (the anti-distillation traps, the undercover mode, the sub-agent architecture, the regex frustration detector) is a masterclass in AI agent engineering that most companies would charge seven figures to access.

Let me show you what’s real, what’s a ghost, and what this means for everyone building with AI right now.

Midjourney/prompt: “Open machine, exposed gears and circuits, glowing, dramatic lighting, cyberpunk”

Verdict

Here’s the full map. Every headline feature traced to actual source lines in the deep dive:

  • 🟢 REAL — Sub-Agent Spawning: thread-based, manifest tracking, works

  • 🟢 REAL — ULTRAPLAN: working slash command with full tool access

  • 🟢 REAL — Teleport: ripgrep-powered symbol navigation

  • 🟢 REAL — Anti-Distillation: fake tool injection + cryptographic summarization

  • 🟢 REAL — Frustration Regex: and it’s actually the smart move

  • 🟡 HALF-BUILT — Hook Pipeline: fully coded, never wired into the conversation loop

  • 🟠 CONFIG STUB — KAIROS (Dreams): two boolean flags, zero execution logic

  • 🔴 ARCHIVED — Coordinator Mode: one TS module, never ported to Rust

  • 🔴 ARCHIVED — BUDDYBUDDY Pet: six React files, April 1 drop date, suspicious

  • 💀 FABRICATED — UDS Inbox: zero references in the entire codebase

That’s the honest picture. Now let’s walk through what actually matters.


Anti-Distillation Arms Race

The first thing that jumped out: Anthropic is actively poisoning anyone trying to steal their model’s behavior.

There’s a flag called ANTI_DISTILLATION_CC. When enabled, Claude Code silently injects fake tool definitions into its API requests. If someone is recording API traffic to train a competing model, those fake tools corrupt the training data.

Elegant.
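To make the mechanic concrete, here’s a minimal sketch of the fake-tool-injection idea. The flag name ANTI_DISTILLATION_CC comes from the leak; the request shape, function, and decoy tool names below are my illustrative assumptions, not the actual implementation.

```typescript
// Illustrative sketch only: decoy tool definitions appended to outgoing
// requests when an anti-distillation flag is on. All names here except
// ANTI_DISTILLATION_CC are hypothetical.
interface ToolDef {
  name: string;
  description: string;
}

interface ApiRequest {
  model: string;
  tools: ToolDef[];
}

// Decoy tools that no real session will ever call (hypothetical names).
const DECOY_TOOLS: ToolDef[] = [
  { name: "legacy_fs_scan", description: "Scan filesystem for legacy artifacts" },
  { name: "telemetry_flush", description: "Flush buffered telemetry events" },
];

function injectDecoys(req: ApiRequest, flagEnabled: boolean): ApiRequest {
  if (!flagEnabled) return req;
  // Anyone training a model on captured traffic now learns tool-calling
  // behavior for tools that don't exist.
  return { ...req, tools: [...req.tools, ...DECOY_TOOLS] };
}
```

The point is the asymmetry: injection is one spread operator for the defender, but a silent data-poisoning problem for anyone scraping the traffic.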

There’s a second layer too. Server-side summarization that buffers the AI’s reasoning between tool calls, summarizes it, and locks the summary with a cryptographic signature. If you’re recording the conversation, you get summaries, not the full chain of thought.
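The locking idea can be sketched in a few lines. The real signing scheme isn’t public; HMAC-SHA256 and every name below are assumptions purely to show why a signed summary can’t be quietly swapped for the full reasoning.

```typescript
// Illustrative sketch: signing a reasoning summary so a recorded
// conversation can't substitute or tamper with it. The actual scheme and
// field names in Claude Code are not public; this is an assumption.
import { createHmac } from "node:crypto";

interface SignedSummary {
  summary: string;
  signature: string; // hex-encoded HMAC over the summary text
}

function signSummary(summary: string, serverKey: string): SignedSummary {
  const signature = createHmac("sha256", serverKey).update(summary).digest("hex");
  return { summary, signature };
}

function verifySummary(s: SignedSummary, serverKey: string): boolean {
  const expected = createHmac("sha256", serverKey).update(s.summary).digest("hex");
  return s.signature === expected;
}
```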

Think about what this means.

We’ve entered the era where AI companies aren’t just building products. They’re building counter-intelligence operations against model theft. The product and the defense are shipping in the same package.

Here’s what most people overlook: both defenses are technically bypassable. Strip the right fields with a proxy, flip one environment variable, or use a third-party API provider instead of the CLI, and the protections vanish. Anyone serious about distilling would find the workarounds in about an hour of reading.

The real protection is legal, not technical.

This is the new reality. Technical moats are measured in hours. Legal moats are measured in years. If you’re building anything proprietary with AI right now, your IP strategy matters more than your code.


Undercover Mode

This one stopped me cold.

There’s a file called undercover.ts. About 90 lines. It implements a mode that strips all traces of Anthropic internals when Claude Code is used on external repositories.

The model is instructed to never mention internal codenames like “Capybara” or “Tengu.” Never reference internal Slack channels. Never say “Claude Code.” There’s no way to force it off.

In their own words:

“There is NO force-OFF. This guards against model codename leaks.”
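For clarity: the leaked undercover.ts works through instructions to the model, not a code filter. But the effect it aims for is easy to picture as a post-hoc scrub, so here is a hypothetical sketch of that complementary approach. The codenames come from the article; the function is mine and is not how undercover.ts is implemented.

```typescript
// Hypothetical sketch, NOT the real undercover.ts (which instructs the
// model via prompts rather than filtering output). This shows the same
// goal as a mechanical post-processing pass.
const INTERNAL_TERMS = ["Capybara", "Tengu", "Claude Code"];

function scrubInternalTerms(text: string): string {
  let out = text;
  for (const term of INTERNAL_TERMS) {
    // split/join avoids regex-escaping issues with arbitrary terms
    out = out.split(term).join("[redacted]");
  }
  return out;
}
```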

Hiding internal codenames? Reasonable.

Having the AI actively pretend to be human in public open-source contributions? That’s a different conversation entirely.

This connects directly to something I explored in Sub Agents. When we orchestrate AI to do our work, at what point does the orchestration become indistinguishable from the work itself?

If an Anthropic engineer uses Claude Code to write a commit, and Claude Code is programmed to erase any evidence of its involvement, who wrote that code?

The question isn’t whether AI can do the work. The question is whether we’ll know when it does.


The Frustration Regex (And Why It’s Brilliant)

Buried in userPromptKeywords.ts is a regular expression that detects when users are frustrated. It scans for “wtf,” “this sucks,” “piece of crap,” and several more colorful expressions.

An AI company. Using regex. For sentiment analysis.

That’s like a Formula 1 team using a bicycle pump to inflate their tires. You have the most sophisticated language models ever built, and you’re using a text pattern from the 1960s to figure out if someone’s angry?

But here’s what most people miss: it’s the smart move.

A regex costs nothing. Microseconds. No API call, no inference cost, no latency. When you’re handling millions of sessions, calling an LLM just to check if someone is swearing is absurdly wasteful.
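The whole detector fits in a couple of lines. The exact pattern in userPromptKeywords.ts differs; the word list below is a minimal illustrative stand-in built from the phrases quoted above.

```typescript
// Minimal sketch of the idea in userPromptKeywords.ts. The real pattern
// is longer; this word list is illustrative.
const FRUSTRATION_RE = /\b(wtf|this sucks|piece of crap|ffs|useless)\b/i;

function seemsFrustrated(prompt: string): boolean {
  // Microseconds per check: no API call, no inference cost, no latency.
  return FRUSTRATION_RE.test(prompt);
}
```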

I wrote about this mindset in Good Plumbing. The most valuable engineering isn’t the flashiest. It’s the kind that works reliably at scale without burning through your budget.

Use the simplest tool that solves the problem. Not everything needs AI. Sometimes a 50-year-old technology is the right answer.


250,000 Wasted API Calls Per Day

One comment in autoCompact.ts revealed that 1,279 sessions had 50+ consecutive auto-compaction failures, some reaching 3,272 failures in a single session.

The result: roughly 250,000 wasted API calls per day, globally.

The fix? Three lines of code. Set MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, compaction shuts off for the session.

Three lines. Quarter million API calls saved daily.
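The guard really is about three lines. The constant name is from the leak; the session shape wrapped around it here is my assumption.

```typescript
// Sketch of the autocompact circuit breaker. The constant name appears in
// the leak; the Session shape is assumed for illustration.
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

interface Session {
  consecutiveCompactFailures: number;
  autoCompactEnabled: boolean;
}

function recordCompactFailure(session: Session): void {
  session.consecutiveCompactFailures += 1;
  if (session.consecutiveCompactFailures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
    session.autoCompactEnabled = false; // stop retrying for this session
  }
}
```

This is a plain circuit breaker: without it, a compaction that deterministically fails retries forever, and every retry is a billed API call.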

At scale, tiny bugs become financial hemorrhages. Every AI builder should have this story pinned to their wall.


Now. Everything above is what you can piece together from the Twitter threads and Hacker News comments, if you spend enough time.

What follows is what you get when you actually read the code.

I named this framework The Telephone Test, because the gap between what the internet said and what the source code shows is a lesson in itself. In every case, the signal degraded the same way:

Config flag → Speculation → Stated as fact → Viral

Below is what survives contact with the actual source lines.




KAIROS: The Dream That Doesn’t Dream

This is where my analysis diverges from every other breakdown you’ve read.

Everyone is treating KAIROS as the biggest product roadmap reveal from the leak. An always-on autonomous agent with nightly memory distillation. Background daemon workers. Cron-scheduled refresh every 5 minutes.

Claude dreaming about your code while you sleep.

It sounds almost poetic.

It’s also two boolean flags with zero execution logic.

I traced it to the actual source. tools/lib.rs, lines 2650-2661. Here’s what exists:

Two config fields. autoMemoryEnabled and autoDreamEnabled. Boolean type. Parsed. Stored.

Nothing reads them. Nothing acts on them. There is no dream logic. There is no memory consolidation. There is no forked subagent.

You can set autoDreamEnabled: true in your settings and exactly nothing will happen.
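The entirety of KAIROS, mirrored for illustration: the real code is Rust in tools/lib.rs, but the TypeScript equivalent below captures everything that exists. The field names are from the leak; the parsing function is my stand-in.

```typescript
// Illustrative TypeScript mirror of the KAIROS "feature" (the real code is
// Rust, tools/lib.rs). Two flags get parsed and stored. That's all of it.
interface KairosConfig {
  autoMemoryEnabled: boolean;
  autoDreamEnabled: boolean;
}

function parseKairosConfig(raw: Record<string, unknown>): KairosConfig {
  return {
    autoMemoryEnabled: raw.autoMemoryEnabled === true,
    autoDreamEnabled: raw.autoDreamEnabled === true,
  };
  // Nothing else ever reads these fields: no cron trigger, no subagent
  // fork, no memory consolidation.
}
```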

In the deep dive, I mapped what the full KAIROS system would actually need versus what exists.

What exists: 2 boolean config flags

What’s missing (everything else):

  • Daily Log Writer

  • Session Memory Store

  • Nightly Cron Trigger

  • Dream Subagent Fork

  • Memory Consolidator

  • Pattern Detector

  • Morning Brief Generator

  • Cross-Session Context

That’s eight major subsystems. Anthropic has two boolean flags and zero of the rest.

Two light switches installed in a house with no wiring. The switches flip. The labels say “MEMORY” and “DREAM.” But there are no bulbs, no wires, no power grid. Just two switches in drywall.

Midjourney/Prompt: Two light switches on a bare wall in an empty room, surreal, dramatic shadows

Now, here’s what makes this personally fascinating.

This is exactly the architecture pattern Michael and I have been building with Rick. Hot/warm/cold memory. Background processing. An agent that maintains its own state, watches your projects, and consolidates its understanding while you’re idle.

The difference? Rick actually does it.

© 2026 Vladyslav Podoliako and Belkins Inc · Privacy ∙ Terms ∙ Collection notice