- AI memory is the new lock-in. Every platform stores your preferences, habits, and context in proprietary formats you can’t move.
- Claude just launched memory import from ChatGPT (March 2026) — but it’s still silo-to-silo, not true portability.
- We’re in the “pre-SMTP moment” for AI memory. Before email was standardized, CompuServe couldn’t talk to AOL. Sound familiar?
- The most portable memory system right now? Plain markdown files (OpenClaw’s MEMORY.md). Accidentally genius.
- Whoever writes the “SMTP of AI memory” — the open standard — captures enormous value. Watch Memobase and the open-source community.
- Your AI memory knows more about you than your therapist. This is a data rights issue, not just a convenience problem.
Imagine Switching Phones and Losing Everything
Here’s a thought experiment. You buy a new phone. You turn it on. And it’s blank. No contacts. No messages. No photos. No saved Wi-Fi passwords. No app logins. No call history. Nothing. Every relationship, every preference, every piece of context you’ve built over years — gone. You’re starting from zero.
You’d riot. You’d return the phone. You’d tweet about it (and probably go viral).
Now here’s the thing: that’s exactly what happens every time you switch AI platforms today.
You’ve spent months — maybe years — training ChatGPT to understand how you think. It knows you prefer concise answers. It knows your coding style. It knows you’re working on a startup, that you hate corporate jargon, that you take your coffee black. It’s learned you.
And if you want to try Claude? Gemini? A local model? You walk in naked. All that context, all that learned behavior — trapped behind OpenAI’s walls.
We’ve Been Here Before. It Was Called Email.
In the late 1980s, if you had a CompuServe email account, you could only email other CompuServe users. AOL users could only email AOL users. MCI Mail talked to MCI Mail. Every service was a walled garden with its own proprietary protocol.
Then someone did something radical: they wrote a standard. SMTP — Simple Mail Transfer Protocol. Suddenly, it didn’t matter what service you used. CompuServe could email AOL. AOL could email a university server. Email became universal.
That one standard — boring, technical, unsexy — is the reason email works today. It’s the reason you can switch from Gmail to Outlook to ProtonMail and still get your messages. It’s the reason email is email and not a collection of incompatible fiefdoms.
AI memory is in the pre-SMTP moment right now.
Every platform has memory. ChatGPT has it. Claude has it. Google’s building it. But none of them can talk to each other. None of them use the same format. And none of them — with one notable exception we’ll get to — let you truly own and move your data.
We need the SMTP of AI memory. And whoever writes it will reshape the entire industry.
The Current Landscape: A Memory Systems Tour
Let’s map the battlefield. There are more players in AI memory than you’d expect, and they’re all approaching the problem differently.
ChatGPT Memory — The Incumbent’s Moat
OpenAI’s memory system is the most widely used, simply because ChatGPT has the most users. It stores your preferences internally — things you’ve told it, patterns it’s noticed, facts about your life and work. Recently, they added an export feature. Progress? Sure. But the format is ChatGPT-specific. It’s like letting you download your CompuServe emails… as CompuServe files that nothing else can read.
Let’s be blunt: OpenAI benefits enormously from memory lock-in. The more ChatGPT knows about you, the harder it is to leave. That’s not a bug in their strategy — it’s the feature. Your accumulated context is their moat.
Claude Memory — The Smart Competitive Move
Anthropic just launched memory import in March 2026. You can now bring your ChatGPT memories into Claude. Smart move. It lowers the switching cost, which is exactly what a challenger should do.
But let’s not confuse a bigger walled garden with an open field. You’re still moving from one proprietary system to another. Claude’s memory import is a competitive weapon, not a portability standard. If Anthropic becomes dominant, don’t expect them to make it easy to leave either.
OpenClaw (MEMORY.md) — Accidentally the Best Approach
There’s something beautifully subversive about this. While everyone else is building complex memory architectures with vector databases and knowledge graphs, OpenClaw just… writes markdown files. It’s the equivalent of keeping your contacts in a text file instead of a proprietary database. Crude? Maybe. Portable? Absolutely. You could copy your MEMORY.md to any system, any AI, any platform, and it just works.
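To make that concrete, here's what such a file might look like. This is a hypothetical sketch of the idea, not OpenClaw's actual schema:

```markdown
# MEMORY.md

## About me
- Name: Alex. Timezone: Europe/Berlin.
- Prefers concise answers; no corporate jargon.

## Current projects
- Building a B2B SaaS startup; stack: TypeScript + Postgres.

## Preferences
- Code style: small functions, explicit types, no clever one-liners.
```

Any LLM can ingest this directly, any text editor can audit it, and moving platforms is a copy-paste.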
Mem0 — The Developer’s Memory Layer
Mem0 is open-source and designed for developers building AI applications. It stores memories as vector embeddings with a Neo4j knowledge graph underneath. Works with LangChain, CrewAI, and the usual suspects. Powerful — but this is infrastructure for builders, not portability for users. Your grandmother isn’t going to spin up a Neo4j instance.
Memobase — The Closest to Getting It Right
Memobase (memobase.ai) is the most interesting player here, because they’re explicitly trying to solve the portability problem. Open source. Designed as a memory layer that works across all your AI tools. The core premise: memory belongs to the user, not the platform.
Still early. Still finding product-market fit. But the concept is exactly right. If anyone writes the SMTP of AI memory, it might come from this direction.
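To illustrate the premise, here's a toy Python sketch of a user-owned memory layer: one interface any AI tool could read and write through, with export built in. This is my illustration of the concept, not Memobase's actual API:

```python
import json
import time

class PortableMemory:
    """A toy, user-owned memory layer. Hypothetical: the point is that
    memory lives with the user, and every AI tool goes through one
    interface instead of keeping its own silo."""

    def __init__(self):
        self.records = []

    def add(self, text, kind="fact"):
        # Each memory is a plain dict: human-readable, trivially serializable.
        self.records.append({
            "kind": kind,
            "text": text,
            "created_at": time.time(),
        })

    def search(self, keyword):
        # Naive keyword match; a real layer would use embeddings.
        return [r for r in self.records if keyword.lower() in r["text"].lower()]

    def export_json(self):
        # The whole point: one call and your memory leaves with you.
        return json.dumps(self.records, indent=2)

mem = PortableMemory()
mem.add("Prefers concise answers", kind="preference")
mem.add("Working on a B2B SaaS startup")
print(len(mem.search("saas")))  # → 1
```

Everything else (sync, encryption, per-platform adapters) is engineering on top; the ownership model is the hard part.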
The Rest of the Field
- Zep — Enterprise-grade shared memory for teams. Great if you’re building a company AI assistant. Not solving the personal portability problem.
- Letta (formerly MemGPT) — Fascinating architecture where the LLM manages its own memory, deciding what to remember and forget. But it’s framework-specific, not a portability play.
- Hindsight — Newer entrant with a genuinely different architecture. Worth watching, too early to judge.
- Memvid — The wild card. Stores memories in video format. No infrastructure needed. It’s such an unhinged approach that it might actually work for certain use cases. Or it might be a curiosity. Either way, respect the creativity.
The Comparison
| System | Type | Portability (1-10) | Open Source? | Best For | Lock-in Risk |
|---|---|---|---|---|---|
| ChatGPT Memory | Platform-native | 2 | No | Existing ChatGPT users | Very High |
| Claude Memory | Platform-native + import | 4 | No | Users switching from ChatGPT | High |
| OpenClaw (MEMORY.md) | Local markdown files | 9 | Yes | Power users, privacy-conscious | Minimal |
| Mem0 | Vector + knowledge graph | 5 | Yes | Developers building AI apps | Medium |
| Memobase | Universal memory layer | 8 | Yes | Cross-platform memory ownership | Low |
| Zep | Enterprise shared memory | 4 | Partial | Teams and enterprise AI | Medium-High |
| Letta (MemGPT) | Self-managed LLM memory | 3 | Yes | Researchers, framework users | Medium |
| Hindsight | Novel architecture | 5 | Yes | Early adopters, experimenters | Medium |
| Memvid | Video-based storage | 6 | Yes | Zero-infra use cases | Low |
Notice the pattern? The most portable solutions are the simplest ones. Plain text beats proprietary databases every time. This shouldn’t surprise anyone — it’s the same reason CSV outlived a thousand database formats.
Why This Matters More Than You Think
Your AI memory isn’t just a convenience feature. It’s rapidly becoming your digital identity.
Think about what’s in there. Your communication preferences. Your work projects and deadlines. Your coding style. Your dietary restrictions. Your relationship dynamics. Your health concerns you casually mentioned. Your financial situation. The inside jokes you’ve built up. The way you think.
This isn’t a settings file. This is you — or at least, a surprisingly accurate model of you. And right now, it’s trapped in whatever platform you happened to start using first.
The switching cost isn’t just inconvenience. It’s identity loss. And every day you keep using one platform, the cost of leaving gets higher. That’s not a market — that’s a trap.
What the SMTP of AI Memory Would Look Like
If we’re going to build a universal standard, what would it actually contain? Here’s a rough sketch:
- user_profile — Name, preferences, communication style, timezone
- learned_preferences — Response length, tone, formality, technical level
- facts — Things you’ve stated as true about yourself and your world
- relationships — People, organizations, and connections you’ve mentioned
- conversation_summaries — Compressed context from past interactions
- tools_and_workflows — How you use tools, preferred approaches, automations
- timestamps — When each memory was created and last referenced
Format: JSON or Markdown. Human-readable. Structured enough for vector search, simple enough for any LLM to ingest directly.
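As a sketch, a single record built from those fields might look like this in Python. The exact keys and nesting are my assumption; no such spec exists yet:

```python
import json
from datetime import datetime, timezone

# Hypothetical portable-memory record. The field names follow the sketch
# above; the values are illustrative.
memory = {
    "user_profile": {"name": "Alex", "timezone": "Europe/Berlin"},
    "learned_preferences": {"response_length": "concise", "tone": "direct"},
    "facts": ["Works on a B2B SaaS startup", "Drinks coffee black"],
    "relationships": [{"name": "Sam", "relation": "cofounder"}],
    "conversation_summaries": ["2026-02: debugged auth flow, prefers JWTs"],
    "tools_and_workflows": {"editor": "VS Code", "vcs": "git"},
    "timestamps": {
        "created": datetime.now(timezone.utc).isoformat(),
        "last_referenced": datetime.now(timezone.utc).isoformat(),
    },
}

# JSON keeps it machine-parseable; the same content could be rendered
# as markdown for human auditing.
print(json.dumps(memory, indent=2))
```

Note what's absent: no vector embeddings, no platform IDs. Those are derived artifacts any system can regenerate; the standard only needs the human-legible source of truth.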
The key requirements: it has to be human-readable (so you can audit what’s stored about you), machine-parseable (so any AI can ingest it), and not owned by any single company. The moment one company controls the standard, we’re back to square one.
This is, incidentally, why markdown might win by default. It’s the cockroach of data formats — it survives everything, every system can read it, and no one owns it.
Who’s Positioned to Win
Let’s be strategic about this.
OpenAI will fight portability. Their entire business model depends on you staying. Memory lock-in is the most powerful retention mechanism they have. Expect them to add more memory features while keeping the format proprietary. They’ll frame it as “better user experience” when what they mean is “harder to leave.”
Anthropic is playing chess, not checkers. The memory import feature is brilliant competitive positioning — it says “we’ll let you bring your memories over” while still building their own walled garden. Smart for gaining market share. Not the same as true portability.
OpenClaw’s markdown files are accidentally genius. By choosing the simplest possible format — plain text on your own machine — they sidestepped the entire portability problem. No migration needed. No import/export. Just files. It’s the approach that looks primitive until you realize it’s the only one that actually works everywhere.
Memobase is the one to watch. They’re explicitly building the universal memory layer. Open source, user-owned, cross-platform. If anyone builds the SMTP of AI memory, it’ll probably look a lot like what they’re doing.
The open-source community will likely win this, just as it did with email (SMTP), the web (HTTP), authorization (OAuth), and everything else that became critical infrastructure. Standards written by committees of competitors consistently beat proprietary formats in the long run.
The Investment Thesis
Here’s what to watch for:
An open standard will emerge — probably within 18-24 months. It’ll be messy at first, like OAuth was messy. Multiple competing proposals. Heated GitHub debates. Eventually, consolidation around something that works well enough.
Companies building on portable memory will have lower churn. Counterintuitively, making it easy to leave makes people stay. This is the lesson Spotify learned (easy playlist export), the lesson banks learned (easier account switching = more trust), and the lesson AI companies will learn the hard way.
The “SMTP of AI memory” hasn’t been written yet. Whoever writes it — the actual RFC, the reference implementation, the first widely-adopted standard — captures enormous value. Not by owning the standard (SMTP isn’t owned), but by being the infrastructure everyone builds on.
This is a picks-and-shovels play. The gold rush is in AI models. The real money is in the memory infrastructure underneath.
The Privacy Angle You’re Not Thinking About
Here’s where it gets uncomfortable.
Your AI memory knows more about you than your therapist. Literally. Your therapist sees you for an hour a week. Your AI is with you all day, every day. It knows your work anxieties, your relationship dynamics, your health worries, your financial decisions, the 2 AM questions you’d never ask anyone else.
And right now, all of that lives on someone else’s server, in someone else’s format, under someone else’s terms of service.
This isn’t just a portability issue. It’s a rights issue.
The EU will regulate this. It’s inevitable. GDPR already established the principle that personal data belongs to the person. AI memory is personal data on steroids — it’s not just what you’ve shared, it’s what the AI has inferred about you. Your predicted preferences. Your behavioral patterns. Your likely future decisions.
Memory portability isn’t just convenient. It’s the mechanism by which you exercise ownership over your digital self. Without it, you’re a tenant in someone else’s model of you.
What You Should Do Right Now
This isn’t just a think piece. Here’s the call to action:
1. Export your AI memories today. If your platform offers an export, use it. Even if the format is proprietary, having a copy is better than having nothing.
2. Try a portable alternative. OpenClaw’s MEMORY.md approach, Memobase, or even just keeping your own markdown file of “things my AI should know about me.” Own your context.
3. Demand portability from your AI tools. When you evaluate new AI products, ask: “Can I export my memory? Can I import it elsewhere? What format is it in?” If they can’t answer, that tells you everything about their priorities.
4. Support open standards. When someone proposes a universal memory format — and they will — support it. Star the repo. Give feedback. Use it. Standards win by adoption, not by mandate.
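As a starting point for step 2, here's a sketch that converts a JSON memory export into a single portable markdown file. The export structure here is assumed, since every platform's format differs; treat it as a template to adapt:

```python
import json

# Hypothetical export structure; real platform exports will differ.
export = json.loads("""
{
  "memories": [
    {"category": "preference", "text": "Prefers concise answers"},
    {"category": "fact", "text": "Works on a startup"},
    {"category": "fact", "text": "Codes mostly in TypeScript"}
  ]
}
""")

def to_memory_md(export):
    # Group memories by category, emit one markdown section per group.
    sections = {}
    for m in export["memories"]:
        sections.setdefault(m["category"], []).append(m["text"])
    lines = ["# MEMORY.md", ""]
    for category, items in sorted(sections.items()):
        lines.append(f"## {category.title()}")
        lines.extend(f"- {item}" for item in items)
        lines.append("")
    return "\n".join(lines)

print(to_memory_md(export))
```

The output is a plain MEMORY.md-style file you can hand to any model. A few dozen lines of glue code is all that stands between a proprietary export and a portable one, which is exactly why the lock-in is a choice, not a necessity.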
The pre-SMTP era of email lasted about a decade before standardization won. We don’t have to wait that long for AI memory. The tools exist. The concepts are proven. The open-source community is already building.
The only question is whether we demand portability now — or wait until our digital identities are so deeply embedded in proprietary systems that escaping becomes functionally impossible.
Your memories. Your data. Your identity. Your choice.
Don’t let anyone else own you.