
Topological Memory: Thesis

Published Mar 2026 · synthesis · Topological Memory · Sandy Chaos · Continuity · Yggdrasil · Retrieval


Topological Memory is a simple idea:

memory works better when it keeps track of connections, not just isolated pieces of text.

Most retrieval systems are good at finding words that match a query. That is useful, but it often falls apart when the real question is something like:

  - Where did this decision come from?
  - What should I pick back up next?
  - How does this draft relate to that script?

Those are not just keyword questions. They are relationship questions.

That is the core claim here:

If continuity problems are relational, retrieval should be relational too.

What “topological” means here

In this context, topological does not mean exotic physics or abstract math for its own sake.

It just means we care about the shape of connection between things.

Instead of storing memory as a pile of disconnected notes, we treat it more like a map: pieces of work are nodes, and the references, dependencies, and provenance between them are edges.

Then retrieval can do more than say “this has similar words.” It can also say “these pieces are connected, and here is how.”
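As a minimal sketch of memory-as-a-map, an adjacency structure with labeled edges is enough. All identifiers here are made up for illustration, not part of any real system:

```python
# Hypothetical memory map: each note id maps to its outgoing connections,
# stored as (relation, target) pairs. Purely illustrative data.
memory_map = {
    "draft-v2": [("revises", "draft-v1"), ("cites", "benchmark-notes")],
    "draft-v1": [("written-in", "repo-a")],
    "benchmark-notes": [("produced-by", "eval-script")],
}

def connections(node):
    """Return the labeled edges out of a node.

    A flat store of disconnected notes has no answer to this question;
    the map can say not only that notes relate, but how.
    """
    return memory_map.get(node, [])
```

A keyword index could find `draft-v2` on its own; only the map can also report that it revises `draft-v1` and cites the benchmark notes.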

Why this matters

As projects spread across repos, chats, drafts, scripts, and automation loops, the hard problem stops being storage. The hard problem becomes continuity.

A few common failure modes:

  - the context for why a decision was made gets lost, even though the artifact survives
  - related work sits in different repos, chats, and drafts and never gets connected
  - “what should I pick back up next?” has no reliable answer, because nothing records what leads to what

Topological Memory is an attempt to make those failures less common by treating relationship and provenance as first-class parts of memory.

A concrete example

Imagine asking:

What should I pick back up next in this project?

A flat search system might return the newest file with the right words. A topological system should do better. It should be able to surface the most relevant piece of work along with the chain of connections that makes it relevant: which draft it revises, which script it depends on, where it came from.

In other words, it should return not just a result, but a path.
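One way to make “a result plus a path” concrete is breadth-first search over the connection graph, recording the chain of hops that led to each hit. The graph and node names below are hypothetical:

```python
from collections import deque

# Hypothetical connection graph: node -> list of (relation, neighbor).
graph = {
    "project-root": [("contains", "draft-v2"), ("contains", "eval-script")],
    "draft-v2": [("revises", "draft-v1")],
    "eval-script": [("produces", "benchmark-notes")],
}

def retrieve_with_path(start, is_answer):
    """BFS from `start`; return (node, path) for the first node that
    satisfies `is_answer`, where path is the list of (relation, node)
    hops taken to reach it."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if is_answer(node):
            return node, path
        for relation, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [(relation, neighbor)]))
    return None, []

node, path = retrieve_with_path("project-root", lambda n: "benchmark" in n)
```

The caller gets back not just the hit but the hops that justify it, which is exactly the human-readable path output the thesis asks for.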

What this is not claiming

This is a deliberately bounded thesis.

It is not claiming:

  - that graphs are magic, or that topology here is a deep mathematical result
  - that this replaces keyword or recency search everywhere
  - that it works beyond the bounded system it is being tested in

This is a practical research claim inside a bounded system. It should stand or fall on whether it improves retrieval in a measurable way.

What a minimal version looks like

A basic Topological Memory system needs five things:

  1. a small graph model for nodes, edges, and traces
  2. a few ordinary baselines, like keyword and recency search
  3. a topology-aware retriever that can show its path
  4. a benchmark set of real continuity questions
  5. metrics that tell us whether it is actually helping

That is enough to test the idea without turning it into a giant theory machine.
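To show how small the baselines and the metric from that list can be, here is a sketch. The corpus, timestamps, and function names are invented for illustration:

```python
# Hypothetical mini-corpus: note id -> (text, timestamp).
notes = {
    "a": ("benchmark metrics draft", 3),
    "b": ("retrieval baseline notes", 2),
    "c": ("old grocery list", 1),
}

def keyword_search(query):
    """Baseline 1: rank notes by count of words shared with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), nid)
              for nid, (text, _) in notes.items()]
    return [nid for score, nid in sorted(scored, reverse=True) if score > 0]

def recency_search():
    """Baseline 2: newest first, ignoring the query entirely."""
    return sorted(notes, key=lambda nid: notes[nid][1], reverse=True)

def hit_rate(results_per_question, gold):
    """Metric: fraction of questions whose top result is the gold answer."""
    hits = sum(1 for q, res in results_per_question.items()
               if res and res[0] == gold[q])
    return hits / len(gold)
```

A topology-aware retriever only earns its keep if it beats these few lines on a benchmark of real continuity questions.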

What would count as success

This idea should only be taken seriously if it does two things:

  1. beats at least one simpler baseline on real continuity tasks, and
  2. gives path outputs that humans can actually understand

If it cannot do both, then it may still be a helpful way to think, but it should not be promoted as a real retrieval layer.
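The two criteria can be written down directly. This is a sketch; the scores stand in for hit rates or any comparable retrieval metric:

```python
def earns_its_place(topo_score, baseline_scores, paths_are_readable):
    """The thesis's own bar: beat at least one simpler baseline on real
    continuity tasks, AND produce path outputs humans can follow."""
    beats_a_baseline = any(topo_score > b for b in baseline_scores)
    return beats_a_baseline and paths_are_readable
```

Beating recency while losing to keyword search still clears the bar, but unreadable paths fail it regardless of score.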

Where it fits right now

Inside the current ecosystem, the split is clean: the experimental side iterates quickly on retrievers and benchmarks, while the durable side holds the graph model and the memory it accumulates.

That separation matters. It keeps the experimental side fast while keeping the durable side disciplined.

Closing

Topological Memory is, at heart, a claim about what memory should preserve.

Not just content. Not just recency. Not just word overlap.

It should also preserve:

  - relationship: how pieces of work connect to each other
  - provenance: where each piece came from and what led to it

If that turns out to improve continuity in practice, then the idea has earned its place. If not, it should stay a useful sketch and nothing more.

Links

GitHub: source code repository for this project.