Topological Memory: Thesis
Topological Memory is a simple idea:
memory works better when it keeps track of connections, not just isolated pieces of text.
Most retrieval systems are good at finding words that match a query. That is useful, but it often falls apart when the real question is something like:
- where did this idea come from?
- what should I resume next?
- what changed this plan?
- why is this artifact important right now?
Those are not just keyword questions. They are relationship questions.
That is the core claim here:
If continuity problems are relational, retrieval should be relational too.
What “topological” means here
In this context, topological does not mean exotic physics or abstract math for its own sake.
It just means we care about the shape of connection between things.
Instead of storing memory as a pile of disconnected notes, we treat it more like a map:
- nodes are artifacts like docs, code, tests, plans, checkpoints, or sessions
- edges are the links between them, like depends_on, mentioned_in, tested_by, or resumes
- traces are the marks left by meaningful activity, like edits, reviews, promotions, or checkpoints
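The nodes/edges/traces model above can be sketched in a few dataclasses. This is a minimal illustration, not a reference implementation; all names (Node, Edge, Trace, Memory) are invented here.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    id: str
    kind: str          # e.g. "doc", "code", "test", "plan", "checkpoint"
    text: str = ""


@dataclass
class Edge:
    src: str
    dst: str
    kind: str          # e.g. "depends_on", "mentioned_in", "tested_by", "resumes"


@dataclass
class Trace:
    node_id: str
    event: str         # e.g. "edit", "review", "promotion", "checkpoint"
    ts: float = 0.0    # timestamp of the activity


@dataclass
class Memory:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)
    traces: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.id] = node

    def link(self, src: str, dst: str, kind: str) -> None:
        self.edges.append(Edge(src, dst, kind))

    def neighbors(self, node_id: str):
        # Walk edges in both directions: the point is relationship,
        # not just direct keyword match.
        for e in self.edges:
            if e.src == node_id:
                yield e.dst, e.kind
            elif e.dst == node_id:
                yield e.src, e.kind
```

The only structural commitment here is that edges are typed and traversable in both directions, so a retriever can ask "what is connected to this, and how?"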
Then retrieval can do more than say “this has similar words.” It can also say:
- this note matters because it led to that implementation,
- this checkpoint matters because it connects two later decisions,
- this result surfaced because several related paths converged here.
Why this matters
As projects spread across repos, chats, drafts, scripts, and automation loops, the hard problem stops being storage. The hard problem becomes continuity.
A few common failure modes:
- you find the note, but not the reason it mattered
- you find the latest file, but not the thing it replaced
- you recover a decision, but not the evidence behind it
- you retrieve text, but not the next useful move
Topological Memory is an attempt to make those failures less common by treating relationship and provenance as first-class parts of memory.
A concrete example
Imagine asking:
What should I pick back up next in this project?
A flat search system might return the newest file with the right words. A topological system should do better. It should be able to surface something like:
- this research note introduced the idea,
- this TODO item kept it alive,
- this checkpoint shows where it stalled,
- this test or implementation is the next real place to continue.
In other words, it should return not just a result, but a path.
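The "return a path" behavior can be sketched as a breadth-first search over a tiny hand-built link graph, from the artifact that matched the query to the artifact where work should resume. The graph contents and node names here are illustrative, mirroring the example above.

```python
from collections import deque


def find_path(links: dict, start: str, goal: str):
    """Return the chain of artifacts connecting start to goal, if any."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connecting path exists


# Hypothetical link graph for the example in the text.
links = {
    "research-note": ["todo-item"],    # the note introduced the idea
    "todo-item": ["checkpoint"],       # the TODO kept it alive
    "checkpoint": ["implementation"],  # the checkpoint shows where it stalled
}

path = find_path(links, "research-note", "implementation")
```

A flat search would return only one of these nodes; the path output makes the whole chain, including the next place to continue, visible at once.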
What this is not claiming
This is a deliberately bounded thesis.
It is not claiming:
- that human memory is literally graph traversal,
- that geometric language automatically explains cognition,
- that this proves anything deep about consciousness,
- or that a nice metaphor is the same thing as a working mechanism.
This is a practical research claim inside a bounded system. It should stand or fall on whether it improves retrieval in a measurable way.
What a minimal version looks like
A basic Topological Memory system needs five things:
- a small graph model for nodes, edges, and traces
- a few ordinary baselines, like keyword and recency search
- a topology-aware retriever that can show its path
- a benchmark set of real continuity questions
- metrics that tell us whether it is actually helping
That is enough to test the idea without turning it into a giant theory machine.
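The benchmark-and-metrics piece implied by the list above can be as small as a hit-rate loop: run each retriever over a set of continuity questions with known answers, and compare scores. The benchmark entries and the baseline below are placeholders, not real data.

```python
def hit_rate(retriever, benchmark):
    """Fraction of continuity questions the retriever answers correctly."""
    hits = sum(1 for q, expected in benchmark if retriever(q) == expected)
    return hits / len(benchmark)


# Placeholder benchmark: (continuity question, expected artifact id) pairs.
benchmark = [
    ("what should I resume next?", "checkpoint-7"),
    ("where did this idea come from?", "research-note-3"),
]


def keyword_baseline(q):
    # A flat baseline: always returns the newest file with matching words.
    return "latest-file"


baseline_score = hit_rate(keyword_baseline, benchmark)
# A topology-aware retriever earns its keep only by beating baseline_score.
```

The point of keeping the harness this small is that the claim stays falsifiable: either the topology-aware retriever scores higher on real continuity questions, or it does not.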
What would count as success
This idea should only be taken seriously if it does two things:
- beats at least one simpler baseline on real continuity tasks, and
- gives path outputs that humans can actually understand
If it cannot do both, then it may still be a helpful way to think, but it should not be promoted as a real retrieval layer.
Where it fits right now
Inside the current ecosystem, the split is clean:
- Sandy Chaos is where the idea gets modeled, benchmarked, and pressure-tested
- Yggdrasil is where validated continuity mechanisms eventually become part of the control layer
That separation matters. It keeps the experimental side fast while keeping the durable side disciplined.
Closing
Topological Memory is, at heart, a claim about what memory should preserve.
Not just content. Not just recency. Not just word overlap.
It should also preserve:
- relation,
- provenance,
- consequence,
- and the path by which one thing became another.
If that turns out to improve continuity in practice, then the idea has earned its place. If not, it should stay a useful sketch and nothing more.
Links
Source code repository for this project (GitHub).