Every single interaction with AI is a dice roll. A cosmic lottery where the same prompt can get you gold or garbage, genius or gibberish.
Oh god. This will be a long one…
I’ve been immersed in various AI systems since my short break. ChatGPT, Claude, Gemini, Grok for text. Kling, Sora, Midjourney for visuals. Suno, ElevenLabs for audio.
You might not have even heard of some of them.
And here’s what I’ve learned: we’re not talking to one AI. We’re talking to infinite versions of it.
Started thinking about this while writing my piece on AI consciousness. Remember that one? Where I questioned whether these systems actually “understand” anything?
Well, here’s the kicker.
Even if they did understand, each conversation spawns a different instance. A different “person” every single time.
And that changes everything about how we should be using these tools.
The Industry Is Shifting
On Monday, OpenAI held its DevDay.
They announced Apps in ChatGPT.
You can now call Canva, Spotify, Figma, Coursera, Booking, Zillow, and Expedia by name during a conversation. They’ve released an Apps SDK so developers can build their own. They’re turning ChatGPT into an operating system where hundreds of millions of users will soon live.
Oh, and they casually mentioned that Sora 2 is here, with creative partners like Mattel already using it for rapid concept-to-video workflows.
By the way, I really enjoyed using Sora. It’s a lot of fun, but copyright fights are on the horizon:
You can find my profile here -> https://sora.chatgpt.com/profile/stbelkins
And in the background, barely making headlines: AMD and OpenAI just announced a strategic partnership to deploy 6 gigawatts of AMD GPU capacity.
This isn’t just about diversifying away from Nvidia. This is OpenAI preparing for a future where they might need to own their chip supply entirely.
We might see Nvidia hit a $10 trillion (or more) valuation in the near future as AI demand explodes. Everyone’s too dependent on them, and the smart money is already hedging.
But here’s what almost nobody is talking about in the wake of these announcements:
None of this matters if you don’t understand how to extract value from AI in the first place.
And most people don’t.
Most people are treating AI like a vending machine. Insert prompt, get output, done. They’re getting maybe 10% of the value these tools can provide.
I’ve spent the last year generating tens of thousands of AI outputs across text, images, and audio. I’ve burned through API credits, exhausted free tiers, and lost entire afternoons to iteration loops.
And I’ve learned something the AI companies absolutely do not want you to know:
To create something truly valuable with AI, you need to treat it like a lottery where you control how many tickets you buy.
The Problem Nobody’s Explaining
Every single time you interact with an AI system, you’re getting a completely different instance.
An instance is a fresh generation, a new roll of the dice. Even if you use the exact same prompt twice, the AI will give you different outputs. Not slightly different. Sometimes radically different.
This isn’t a bug. This isn’t something that will be “fixed” in GPT-5 Pro or Claude 4.5. This is fundamental to how these systems work.
These models don’t have answers stored somewhere. They generate responses in real time, word by word, token by token. At each step, they’re sampling from a probability distribution. The word they choose influences what comes next. Change one early word, and the entire trajectory shifts.
By the time you’re 50 words into a response, you’re thousands of probability forks away from where you started.
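To make the fork-by-fork divergence concrete, here is a minimal toy sketch of sampled generation. The vocabulary and probabilities are entirely made up for illustration (real models sample over vocabularies of ~100k tokens), but the mechanism is the same: each next word is drawn from a distribution, not looked up, so identical prompts take different paths.

```python
import random

# Toy next-word model: each word maps to a probability
# distribution over possible next words. All values here are
# invented for illustration only.
NEXT = {
    "the":      [("meeting", 0.5), ("launch", 0.3), ("deadline", 0.2)],
    "meeting":  [("went", 0.6), ("ran", 0.4)],
    "launch":   [("went", 0.5), ("slipped", 0.5)],
    "deadline": [("slipped", 1.0)],
    "went":     [("well.", 0.7), ("sideways.", 0.3)],
    "ran":      [("long.", 1.0)],
    "slipped":  [("again.", 1.0)],
}

def generate(seed):
    """Generate one sentence from the same 'prompt' ("the")."""
    rng = random.Random(seed)
    words = ["the"]
    while words[-1] in NEXT:
        options, weights = zip(*NEXT[words[-1]])
        # Sampling rather than always taking the most likely word:
        # this one line is why identical prompts diverge.
        words.append(rng.choices(options, weights=weights)[0])
    return " ".join(words)

# Same prompt, different instances -> different trajectories.
for seed in range(3):
    print(generate(seed))
```

One early fork ("meeting" vs. "deadline") commits the whole rest of the sentence to a different branch, which is exactly the trajectory-shift the paragraph above describes, just at toy scale.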
That’s why the same prompt can yield a generic corporate email or a punchy, memorable message. That’s why one image generation gives you bland stock photo vibes and another stops people scrolling.
Most people generate once, maybe twice, and accept whatever they get.
They’re leaving millions on the table. Not metaphorically. Literally.
The Real Numbers (And Why They Matter)
Let me show you what proper iteration actually looks like, with real numbers from my own work.
Music Production: The 847 Generation Project
I’ve been working on finishing three techno tracks. I had all the stems ready—drums, bass, melodic layers, percussion. (I’ll send a Spotify link in one of the next newsletters, in case you’re wondering.)
Quick sidebar: a stem in music production is an individual track layer. One ingredient in the recipe. Your drum stem, your bass stem, your vocal stem. Stack them together, you get a complete song.
What I was missing: vocals that actually fit the vibe.
So I used Suno and ElevenLabs to generate vocals from my lyric ideas.
Here’s what the real numbers looked like:
847 total vocal generations across three tracks
23 that were technically usable (correct key, tempo, no glitches)
6 that actually fit the vibe of each track
2 that made it into the final mix
That’s a 0.2% success rate.
Time investment: roughly 12 hours of generation and review.
Cost: around $180 in API credits and subscription fees.
Now here’s the important part: could I have hired a session vocalist for $180 and 12 hours? Absolutely.
But here’s what I got that I couldn’t have gotten otherwise:
Voices that don’t exist in the real world
Perfect repeatability (need another take? Regenerate)
Instant A/B testing of different vocal styles
The ability to iterate on lyrics and melody simultaneously
Zero coordination overhead, zero scheduling
More importantly: I now have a system. Next time I need vocals, I know the hit rate. I know how many generations to budget. I know which prompting patterns yield better results.
That system is worth 10x the $180 I spent learning it.
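"I know how many generations to budget" can be turned into actual math. If you treat each generation as an independent draw with a known hit rate, the number of attempts needed to reach a target probability of at least one usable hit follows directly from the geometric distribution. A minimal sketch, using the "fits the vibe" rate observed above (6 hits in 847 generations) as the example input; the independence assumption is a simplification, since prompt tweaks change the odds between draws.

```python
import math

def generations_needed(hit_rate, confidence=0.9):
    """Smallest n such that P(at least one hit in n draws) >= confidence,
    assuming independent draws with the given per-draw hit rate."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - hit_rate))

# Observed "fits the vibe" rate from the vocal project: 6 / 847.
rate = 6 / 847
print(generations_needed(rate, 0.90))  # ~324 generations for 90% odds
print(generations_needed(rate, 0.99))  # ~648 generations for 99% odds
```

That is the lottery framing made literal: the hit rate tells you the ticket price, and the confidence level you want tells you how many tickets to buy.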
OpenAI’s DevDay Makes This More Important
Now let’s connect this back to what just got announced.
Apps in ChatGPT means you’re about to call specialized tools during your conversation. Mid-prompt, you can spin up Canva to design something, Zillow to pull property data, Figma to mock up a UI.
This is a phase shift.
Right now, you use ChatGPT for text. If you need an image, you switch to Midjourney. If you need data, you export to a spreadsheet. Each tool is siloed.
Soon, it’s all one continuous flow. You’re having a conversation, and tools appear when needed.
Here’s what this means for the instances lottery:
The iteration space just got exponentially larger.
Before: you iterate within one domain. Text, or images, or code.
Soon: you iterate across domains in a single session. You can generate 10 versions of a pitch deck, pull real market data for each version, generate supporting visuals, refine the narrative based on the data, regenerate the visuals, and export the final version, all without leaving the chat.
The people who already understand iteration will adapt instantly. They’ll know to generate 10 slide outlines, then 5 data visualizations per slide, then 3 design variations per viz.
The people who don’t? They’ll generate once per step and wonder why their output looks like everyone else’s.
This is about to create a massive skill gap.
Let’s say you’re planning a trip. You could:
Ask ChatGPT for destination ideas (generate 10)
For each destination, pull real hotel options from Booking (5 per destination)
For the top 3 destinations, generate itineraries (3 variations each)
Pull flight prices from Expedia for each itinerary
Generate budget breakdowns
Create a visual comparison in Canva
Iterate on the final choice with real-time data
That’s not 10 generations. That’s 10 × 5 × 3 × 3 = 450 possible paths through the decision tree.
The person who explores 450 paths makes a better decision than the person who explores 1.
Most people will still explore 1 path, because they don’t understand that the cost of exploring 450 is now nearly zero.
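The 450 figure is just the product of the branching factors, and enumerating the tree makes the scale tangible. A quick sketch, with labels taken loosely from the trip-planning example above (the author counts the later steps only for the top 3 destinations, so treat this as a rough upper bound on the branching):

```python
from itertools import product

# Branching factors from the trip-planning walkthrough above.
destinations = range(10)  # 10 destination ideas
hotels       = range(5)   # 5 hotel options per destination
itineraries  = range(3)   # 3 itinerary variations (top picks)
variants     = range(3)   # 3 budget/visual treatments

paths = list(product(destinations, hotels, itineraries, variants))
print(len(paths))  # -> 450 distinct paths through the decision tree
```

Each extra tool in the chat multiplies, not adds, the number of explorable paths, which is why the iteration space grows exponentially as apps stack up.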
The Overlooked Economic Advantage
Let me be blunt about something nobody’s saying out loud:
Mastering the instances lottery is the highest ROI skill you can learn right now.
Traditional high-value skills take years to develop. Learning to write well, design, code, or analyze data takes thousands of hours.
Learning to extract value from AI through iteration? You can get 80% of the way there in a month of focused practice.
But the output quality difference between someone who iterates 10x and someone who doesn’t?
Easily 5-10x better results.
Think about what that means economically:
You can produce work that’s 5-10x better than your peers, using a skill that takes 1/100th the time to learn.
That’s not a competitive advantage. That’s an unfair advantage.
And it’s temporary. In 2-3 years, everyone will know this. Right now? Maybe 5% of AI users understand it.
The $100k Consultant Secret
Want to know who’s really making bank with AI?
A consultant I know charges Fortune 500 companies $100k for “AI transformation strategies.”