Vlad's Newsletter

You Will Lose to AI

No Matter How You Try to Resist

Vladyslav Podoliako
Jan 13, 2026
∙ Paid

Hey.

I want to tell you something uncomfortable.

You’ve already lost.

Not “might lose.” Not “could lose if you don’t adapt.”

You have already lost.

The outcome is decided. The only variable left is how badly.

I know that sounds dramatic. Stay with me.

[Image: Is it Blade Runner?]


My theory

Everyone frames this as a race. You vs AI. Human creativity versus machine efficiency. Adapt or die.

That framing is broken.

You’re not racing against AI. You’re racing against other humans who use AI.

This changes everything.

Think about chess. In 1997, Deep Blue beat Kasparov. Everyone said chess was dead. Humans could never compete with machines again.

But here’s what actually happened.

Chess didn’t die. It evolved.

The best chess in the world is now played by “centaurs” - human-AI teams. A mediocre player with a good AI beats a grandmaster alone. A grandmaster with AI beats the AI alone.

The game didn’t end. The rules changed.

The people who kept playing by old rules didn’t lose to the machine. They lost to other humans who learned the new rules faster.

That’s exactly what’s happening now. Across every industry. Every profession. Every skill.

You’re not competing with GPT-5. You’re competing with the version of your competitor who woke up six months ago and started building AI into every workflow.


The Three Types of Losers

I’ve watched this play out for three years. The losing strategies fall into three categories.

1. The Deniers

“This is hype. It’ll pass. Remember NFTs?”

The deniers are still waiting. They’ve been waiting since ChatGPT launched in November 2022. They’ll keep waiting until their clients stop calling.

The denial isn’t stupid. It’s a coping mechanism.

If AI is real, everything they built their career on needs to be rebuilt. That’s terrifying. Denial is easier.

But denial has a cost. Every month of denial is a month of falling behind. And the gap compounds.

2. The Philosophers

“We need to think carefully about this. What are the ethical implications? How do we ensure AI serves humanity?”

These questions matter. They’re also a trap.

The philosophers spend so much time thinking about AI that they never learn to use it. They can give you a brilliant 45-minute lecture on alignment risks and labor displacement.

They can’t build a workflow that saves them 10 hours a week.

Philosophy is important. But it’s also a form of sophisticated procrastination.

You can think about the future or you can build skills for it. Most philosophers are doing the former while pretending they’re doing the latter.

3. The Tourists

“I use ChatGPT sometimes. I’m not behind.”

This is the most dangerous category. Because it feels like adaptation.

The tourists have accounts. They’ve tried the tools. They can make small talk about AI at dinner parties.

But they’ve never:

  • Pushed past the first plateau

  • Iterated on a single output more than three times

  • Built a system that compounds

Using AI occasionally is like going to the gym once a month. You’re technically exercising. You’re not getting stronger.

The tourists will be blindsided. Because they think they’re prepared. They’re not.


What Winning Actually Looks Like

I know a copywriter who charges $15,000 per piece of content. Really good content. She used to spend 40 hours on each one: research, drafts, revisions, polish.

Now she spends 8 hours.

She didn’t get faster by typing quicker. She built a system:

  • AI handles research synthesis

  • AI handles first drafts

  • AI handles variation generation

  • She handles strategy, voice calibration, and the final 20%

Her output quality went up. Her time went down. Her effective hourly rate quintupled.
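
Here’s roughly what a system like that looks like when you sketch it in code. To be clear, this is a minimal sketch, not her actual setup: complete() is a stand-in for whatever model API you call, and the prompts are placeholders.

    # A minimal sketch of a staged content pipeline. Not her actual system:
    # complete() is a stand-in for whatever model API you call.
    def complete(prompt: str) -> str:
        raise NotImplementedError("wire this up to your model of choice")

    def produce_piece(sources: list[str], brief: str) -> str:
        # Stage 1: AI synthesizes the research.
        research = complete("Synthesize these sources into key points:\n\n"
                            + "\n---\n".join(sources))
        # Stage 2: AI writes the first draft.
        draft = complete(f"Brief: {brief}\n\nResearch:\n{research}\n\n"
                         "Write a first draft.")
        # Stage 3: AI generates variations on the highest-leverage parts.
        variations = complete("Rewrite the hook and the close three different "
                              f"ways:\n\n{draft}")
        # Stage 4 is the human: strategy, voice calibration, the final 20%.
        return draft + "\n\nVARIATIONS:\n" + variations

The code isn’t the point. The point is that every stage is a place where her judgment gets written down.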

But here’s the part that matters:

She didn’t just adopt AI. She encoded her expertise into the system.

Her prompts contain 15 years of copywriting knowledge:

  • What makes a hook work

  • Why certain structures convert

  • The rhythm of sentences that hold attention

She translated tacit knowledge into explicit instructions.

The AI doesn’t replace her judgment. It amplifies it.

Losers ask: “Will AI take my job?”

Winners ask: “How do I encode my judgment so AI multiplies my output?”


The Encoding Problem

AI doesn’t automatically know what you know. It doesn’t absorb your taste by proximity. It can’t read your mind about what “good” looks like in your domain.

You have to tell it. Explicitly. In detail. Through iteration.

This is the encoding problem. And solving it is the entire game.

Think about a chef.

They have what’s called “palate memory.” Thousands of flavor combinations are stored in their brain. They taste something and instantly know what’s missing, what’s too strong, what needs balance.

That knowledge is useless to AI unless it’s encoded.

But watch what happens when a chef actually encodes their expertise:

“When balancing acidity, start at 0.5% by weight and increase in 0.1% increments. Stop when brightness appears but before the source becomes identifiable. The dish should taste ‘lifted’ not ‘lemony.’”

That’s encoding. Translating intuition into instruction.
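
Notice how little distance there is between that instruction and working code. A sketch: the numbers are the chef’s, the function names are mine, and taste() stands in for the human palate (or a model trained on its feedback).

    # The acidity rule above, encoded. taste() is the human palate: it
    # returns "flat", "lifted", or "lemony" for a given amount of acid.
    def balance_acidity(dish_weight_g: float, taste) -> float:
        pct = 0.5                          # start at 0.5% by weight
        while pct < 5.0:                   # sanity cap, my addition
            verdict = taste(dish_weight_g * pct / 100)   # grams of acid
            if verdict == "lifted":        # brightness appeared: stop here
                return pct
            if verdict == "lemony":        # source identifiable: step back
                return pct - 0.1
            pct += 0.1                     # still flat: next increment
        raise ValueError("no balance point found")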

The chef who does this can produce variations 100x faster:

  • AI generates options based on encoded principles

  • Chef tastes and refines

  • Each refinement adds to the encoding

  • The system gets smarter

The chef who doesn’t encode? Still doing everything by hand. While their competitor ships a new menu every month.

Your job isn’t to compete with AI. Your job is to encode your expertise so deeply that AI becomes an extension of your judgment.


The Compound Effect

Here’s where it gets interesting.

Encoding creates a flywheel:

  1. The more you encode → the better your AI outputs become

  2. Better outputs → faster iteration

  3. Faster iteration → more learning

  4. More learning → better encoding

  5. Repeat

Someone who started this process two years ago isn’t 2x ahead of you.

They’re 100x ahead. Because each cycle builds on the previous one.


I’ve been using AI for writing assistance for over three years now. My prompts have gone through maybe 200-300 major iterations.

Version 1: “Help me write a post for X.”

Version 50: Detailed persona, voice examples, structural requirements, anti-patterns to avoid.

Version 200: A system that knows my thinking patterns well enough to draft things I actually want to publish.
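
To make the gap concrete, here’s the shape of the difference. The field contents below are illustrative stand-ins, not my actual prompts.

    # Version 1 vs. something like Version 50, sketched. The field contents
    # are placeholders, not my real prompt.
    V1 = "Help me write a post for X."

    V50 = """You are drafting for one specific author, not a generic one.
    PERSONA: direct, a little confrontational, short declarative sentences.
    VOICE EXAMPLES: {three_recent_posts}
    STRUCTURE: hook in the first line, one idea per paragraph, hard close.
    ANTI-PATTERNS: no hedging, no "in today's fast-paced world", no emoji.
    TOPIC: {topic}
    TASK: draft three candidate openings before writing the full post."""

Version 200 isn’t one longer prompt. It’s dozens of these, organized into a system and refined against real outputs.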

Someone starting today with Version 1 isn’t slightly behind.

They’re starting a race where I’ve already run 200 laps.

And I’m still behind people who started before me.

The flywheel doesn’t wait.


The Four Layers of AI Mastery

There’s one framework I like to use here. Four layers, with examples for each.

Layer 1: Consumption

You use AI for one-off tasks:

  • Write this email

  • Summarize this document

  • Answer this question

Most people live here. It’s the kiddie pool.

You get value, but it’s linear. Every task requires a new prompt. Nothing compounds.


Layer 2: Production

You use AI to create outputs at scale:

  • Generate content

  • Analyze data

  • Produce variations

This is where the tourists think they are. They’re not, usually.

Real production means volume. Dozens or hundreds of outputs per week, not per month. The iteration rate is what separates Layer 2 from Layer 1.


Layer 3: Systematization

You build workflows and processes around AI:

  • Prompts become templates

  • Templates become systems

  • Systems run semi-autonomously

This is where the copywriter I mentioned lives. She’s not prompting from scratch each time. She’s running a machine she built.
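
In code, the jump from Layer 2 to Layer 3 is easy to see: you stop retyping prompts and start parameterizing them. A sketch, with hypothetical template names:

    # Layer 2 is a prompt you retype. Layer 3 is a template registry you run.
    def complete(prompt: str) -> str:      # same stand-in as before
        raise NotImplementedError

    TEMPLATES = {
        "cold_email": ("Write a cold email to a {role} at {company} about "
                       "{offer}. Voice: {voice}. Under 120 words."),
        "summary": "Summarize for a {audience} reader in {n} bullets:\n\n{text}",
    }

    def run(task: str, **params) -> str:
        return complete(TEMPLATES[task].format(**params))

One call like run("cold_email", role="CMO", company="Acme", offer="an audit", voice="blunt") and everything you’ve learned about cold emails comes along for free.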


Layer 4: Encoding

You translate your expertise into artifacts that persist:

  • Your judgment lives in the system

  • The system produces outputs that reflect your taste, your standards, and your domain knowledge

This is the goal. And almost nobody is here.


Layer 1: AI as tool

Layer 2: AI as production multiplier

Layer 3: AI as workflow infrastructure

Layer 4: AI as extension of self

Where are you? Where do you need to be?

Why Claude Code (and Now Cowork) Matter

Claude Code proved something important over the past few months: agent-based AI works for large projects.

Give it a detailed plan. It breaks the plan into subtasks. It executes them methodically. No distractions. No context-switching. No forgetting what it was doing.

I’ve used it not just for programming but for working with large amounts of text data. It’s a real time-saver where consistency and attention to detail are essential.
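
For the text-data use case, the simplest version is driving Claude Code’s non-interactive print mode from a loop. A sketch, assuming the claude CLI is on your PATH and that -p takes a prompt and prints the response, which is how it behaves in my install; check claude --help on yours.

    # Batch-cleaning text files through Claude Code's print mode.
    # Assumes `claude -p` takes a prompt and prints the response; verify
    # against your installed version with `claude --help`.
    import subprocess
    from pathlib import Path

    PROMPT = ("Fix encoding artifacts and normalize all dates to ISO 8601. "
              "Return only the cleaned text.")

    Path("clean").mkdir(exist_ok=True)
    for src in sorted(Path("raw").glob("*.txt")):
        result = subprocess.run(
            ["claude", "-p", f"{PROMPT}\n\n{src.read_text()}"],
            capture_output=True, text=True, check=True,
        )
        (Path("clean") / src.name).write_text(result.stdout)

The win is consistency: the same instruction applied identically to file one and file five hundred.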

Claude Cowork extends this to everything else.

The same systematic execution. The same ability to handle multi-step tasks. But now for:

  • Document management

  • Calendar scheduling

  • Presentation creation

  • Data processing

  • File organization

Security has always been a concern with agents. Claude Code asks permission by default before making changes to files or system settings. Cowork appears to have found the same balance between autonomy and control.

This is another sign that 2026 is the year of agents. Anthropic, OpenAI, Google — all are moving toward giving AI not just the ability to answer questions, but to perform multi-step tasks with access to tools.

The only question is how quickly these tools become available beyond corporate clients and enthusiasts with $100/month subscriptions.

For now, Anthropic is testing demand among those willing to pay. But the pattern is clear: what’s expensive today becomes accessible tomorrow.

The Claude Code Playbook

I wanted to share more about Claude Code, because I’m obsessed with it.
