Happy Friday everyone!
The mass adoption of AI is here. Last week, the ChatGPT 4o Ghiblitization of the Internet happened.
Everyone saw these right?
Over the past week, Studio Ghibli-style images have been popping up everywhere.
I've even been converting everything I can into that style.
If you missed it, use GPT-4o with this simple prompt:
Convert this image to Studio Ghibli style
Voila, you've generated it. If you want to share your work, put it here in the comments.
And you can do much more with it. One more example:

I'm devoting this entire edition to "Adoption" because, in my experience, it's a hellish process. We've faced it multiple times with our own products, and AI is in that stage right now, passing through it extremely successfully.
As you know, I'm raising both my hands, cheering for AI to move faster, faster, faster, and become the tool that transforms humanity.
It has already started but there’s still so much more we can do with it.
Recap
Welcome to the second edition of my paid newsletter! If you caught the first edition, you know I promised to delve deeper into two big topics:
My Memory Bank approach for prompts—a system I use to push AI to its limits
How to handle AI context windows for bigger projects without losing track
Today, we’re tackling both. I’ll show you exactly how I keep VS Code with Cline (my AI co-developer) and AIs such as Claude, ChatGPT, DeepSeek, and Gemini “in the loop” across resets, ensuring continuity in multi-file projects.
Then we’ll compare Claude 3.7 Sonnet and Google Gemini 2.5 Pro because context windows are becoming the new arms race in AI.
1) The Memory Bank
In the first edition, you heard me talk about my “Memory Bank.”
The idea is simple: My AI partner (Cline, Cursor, or whatever you use) loses all internal memory each session, so it needs an external “source of truth” to remain consistent.
That’s where the Memory Bank structure comes in.
The hardest part is untangling, in your own head, all the things you know that Claude doesn't, and writing them down.
Why a Memory Bank?
Zero Retention: LLMs (like Claude or ChatGPT) can’t always “remember” your entire codebase or doc references across sessions.
Explicit Organization: By storing everything in structured Markdown files, you can force the AI to read only the relevant docs at the start of each new conversation.
Scalable: As your project grows, you add more sections—API docs, architecture outlines, user requirements, etc.—all well-labeled and easy for the AI to parse.
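To make this concrete, here's a minimal sketch of how a Memory Bank could be stitched together at the start of a session. The file names (`projectBrief.md`, `architecture.md`, `activeContext.md`) are illustrative assumptions, not a fixed convention — the point is simply that well-labeled Markdown docs get concatenated into one preamble the AI reads first:

```python
from pathlib import Path

# Hypothetical Memory Bank layout -- these file names are illustrative;
# use whatever structured docs your project actually keeps.
MEMORY_BANK_FILES = [
    "projectBrief.md",   # high-level goals and scope
    "architecture.md",   # system outline the AI must respect
    "activeContext.md",  # what we're working on right now
]

def build_context(bank_dir: str) -> str:
    """Concatenate the Memory Bank markdown files into one prompt preamble,
    labeling each section so the AI can tell the docs apart."""
    parts = []
    for name in MEMORY_BANK_FILES:
        path = Path(bank_dir) / name
        if path.exists():  # skip docs the project doesn't have yet
            parts.append(f"## {name}\n\n{path.read_text()}")
    return "\n\n".join(parts)
```

You'd paste the returned string at the top of each fresh conversation, so the AI starts from your external "source of truth" instead of a blank slate.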