Wednesday morning I typed this to my AI assistant:
"A lot of issues lately in our connections had to restart again. Please use our last few chats and the json file and summary in the memory discord archive."
Four minutes later, it was fully grounded: every piece of AI agent memory rebuilt from markdown files.
It knew about the €6K/month Pitcocy revenue. It knew I'd hit a 160 kg deadlift for 3x5 last Saturday. It knew I'd spec'd a new AI Audit offer the night before. It knew I don't drink. It knew my dog's name is Delphy.
The agent had died. The files hadn't.
What actually happened
I run a custom Claude Code setup on my Mac Mini. It talks to me through Discord, listens via Whisper when I send a voice message, and replies through macOS `say` when I want it to. On top of that there are a few recurring loops. Every 2 hours it sends me a check-in. Every Thursday at 9:17 AM it checks whether its own loops are still alive (yes, the snake eats its tail). Every evening at 8 PM it serves me a "brain snack": some random fact, anything from why coffee evolved as a pesticide to which Romanian engineer crashed the first jet aircraft in 1910.
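Those recurring loops reduce to a tiny scheduler: each loop has an interval, and on every tick you fire whatever is overdue. A minimal sketch, assuming illustrative loop names and a simple last-run table (this is not the actual Claude Code cron configuration):

```python
import time

# Hypothetical loop registry: names and intervals are illustrative.
LOOPS = {
    "check_in": 2 * 60 * 60,                 # every 2 hours
    "brain_snack": 24 * 60 * 60,             # every evening
    "loop_health_check": 7 * 24 * 60 * 60,   # weekly self-check
}

def due_loops(last_run: dict, now: float) -> list:
    """Return the names of loops whose interval has elapsed since their last run."""
    return [name for name, interval in LOOPS.items()
            if now - last_run.get(name, 0) >= interval]

# Example: the check-in last fired 3 hours ago, everything else just ran.
now = time.time()
last_run = {"check_in": now - 3 * 3600, "brain_snack": now, "loop_health_check": now}
print(due_loops(last_run, now))  # -> ['check_in']
```

The point of keeping it this dumb: when the session dies, a table like `last_run` can be rebuilt from logs, and the loops come back.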
This week, the API connection blew up at 2:37 AM with an ECONNRESET error. The Claude Code cron runtime doesn't auto-retry on those. One bad packet and the whole session died. Every loop, every scheduled task, the live memory of who I am and what we were working on, all of it gone in one go.
By morning the loops were silent. I noticed because my check-in didn't arrive.
So I rebooted Claude Code and sent the four-word version of what was needed: use the archive.
What was in "the archive"
Three things, all just markdown:
- A `context/alfred/` folder: my background, my 2026 goals, my values, how I write, my reading list. The "this is who I am" baseline.
- A `context/memory/` folder: daily session logs, learnings (patterns my AI notices about how I work), decisions (with reasoning), ideas (the 85% trash pile and the 15% gold), and reflections (verbatim Discord exchanges I marked as worth keeping).
- A Discord history export: 1,238 messages from the last 7 weeks, exported via a small script I wrote on May 12, plus an archive summary that compresses the whole period into a ~5,000-word callout document.

All of it sits in a regular git repository. No vector store. No custom framework. No special database. The agent reads it. That's it.
When I rebooted, Claude pulled the summary first, then fanned out into the specific files relevant to today. Memory rebuilt itself in less time than it takes to make coffee.
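That "summary first, then fan out" order is the whole trick. A minimal sketch, assuming a layout like mine (`archive-summary.md` is real per this article; the `memory/` filenames and the topic-matching rule are illustrative):

```python
from pathlib import Path

def rebuild_context(archive: Path, today_topics: list) -> str:
    """Load the compressed summary first, then fan out into only the
    memory files whose names match today's topics."""
    parts = [(archive / "archive-summary.md").read_text()]
    for f in sorted((archive / "memory").glob("*.md")):
        if any(topic in f.stem for topic in today_topics):
            parts.append(f.read_text())
    return "\n\n".join(parts)
```

One summary plus two or three relevant files is a few thousand tokens, not a hundred files. That's why the rebuild takes minutes, not hours.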
The architecture lesson: stateless agents, durable files
There's a phrase I've used here before: Claude Code is stateless. Your files are the memory.

This week, that became more than a slogan.
Most people building AI agents reach for the same toolkit. Vector databases, LangChain pipelines, custom memory systems, persistent agent state. Sometimes you actually need that stuff. Most of the time you're building plumbing that won't survive its first hardware reset.
A simple folder of markdown files survives a lot of things that fancy pipelines don't:
- Power outages
- API errors
- Session timeouts
- Model upgrades
- Me switching laptops
The agent is disposable. The files are not.
If you design for the agent to last, every crash is a heart attack. If you design for the files to last, the agent dying becomes a non-event.
What survived vs. what died
After the crash, here's the honest breakdown.
Died:
- The active session memory ("I was midway through writing this spec")
- The recurring loops (had to be re-created in the new session)
- The live task list
Survived:
- Every line of code in every repo
- Every markdown note (`CLAUDE.md`, `SOUL.md`, `understanding-alfred.md`)
- The session logs, decisions, learnings, reflections
- The Discord history export, every message, in JSON, grep-friendly
Process state died. Artifacts survived. The artifacts were enough to rebuild the process state in four minutes.
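"Grep-friendly" is literal: plain substring search over the JSON export is all the retrieval this setup needs. A sketch with a made-up message shape (the field names are assumptions, not my export script's actual schema):

```python
import json

# Hypothetical export shape: a list of {"author", "content", "timestamp"} records.
export = json.loads("""[
  {"author": "alfred", "content": "hit 160 kg deadlift for 3x5", "timestamp": "2025-05-10"},
  {"author": "agent",  "content": "logged the new AI Audit offer spec", "timestamp": "2025-05-11"}
]""")

def search(messages, term):
    """Plain substring search: no vector store, no index, just text."""
    return [m for m in messages if term.lower() in m["content"].lower()]

print([m["timestamp"] for m in search(export, "deadlift")])  # -> ['2025-05-10']
```

At 1,238 messages, a linear scan is instant. The fancy retrieval stack earns its keep at millions of records, not thousands.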
Hotel, not tent
I keep coming back to a metaphor I use with my coaching clients: think of your AI setup like a hotel, not a tent.

A tent is what you carry with you. If you lose it, you lose everything inside it. A hotel is somewhere you go to. The hotel doesn't care if you leave for a week. The room is still there when you return.
Most people are building AI agents like tents. State lives inside the agent. When the agent dies, the state dies with it.
I build mine like a hotel. The state lives in the building (the repo). The agent is the guest. Guests come and go. The building stays.
That's why this week's ECONNRESET didn't feel like a disaster. It felt like a guest checking out.
How to build AI memory that survives a crash
You don't need my exact setup. You need three things.
- A folder structure for context that lives outside your AI tool. Whatever the tool reads (a project directory, a Drive folder, an Obsidian vault), make sure it's yours, not the tool's. If the tool dies tomorrow, your folder still exists.
- A discipline of writing things down. Every conversation that produced a decision: log it. Every pattern you notice about yourself or your work: log it. Don't trust your head, and definitely don't trust the agent's head either.
- A way to compress and bring context back. When you start a new session, you don't load 100 files. You load one summary, and the agent fans out from there. Mine is `archive-summary.md`. Yours could be `CLAUDE.md`, an instructions doc, or a single onboarding note.
Combine those three and your AI gets harder to kill.
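The second habit, writing things down, can be as small as an append helper. A sketch, assuming a `context/memory/decisions.md` log and an entry format I'm inventing for illustration:

```python
from datetime import date
from pathlib import Path

def log_decision(repo: Path, decision: str, reasoning: str) -> None:
    """Append a dated decision, with its reasoning, to a plain markdown log."""
    entry = f"\n## {date.today().isoformat()}\n- Decision: {decision}\n- Why: {reasoning}\n"
    log = repo / "context" / "memory" / "decisions.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(entry)
```

Append-only markdown in a git repo means every decision is timestamped twice: once in the entry, once in the commit history.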
The point
This week's near-disaster was actually a stress test. The system passed. Not because the agent was robust. It absolutely wasn't. But because the memory layer was a separate, durable thing.
If you're building anything with AI right now, ask yourself one question:
If the agent died right now, what would survive?
If the answer is "everything important", you've built it right.
If the answer is "I don't know", you have homework.
The better your markdown files are, the less you'll fear the next restart. :D

About Alfred Simon
AI Systems Builder & Coach
I build custom AI systems for marketing teams: search term analysis, ad creation, competitor research, reporting, all automated. I write about context management, AI workflows, and the messy reality of building things with AI. No theory. No hype. Just what actually works after 30+ agents and a very healthy trash pile :D
Want to build something like this for your team? Let's talk.
