# Nanobot Memory Enhancement — Implementation Plan

Date: 2026-03-25
Based on: OpenClaw memory research
Goal: Bring OpenClaw-style long-term memory into nanobot with minimal core-loop changes
## Current State (What We Have)

### File-based memory (`nanobot/agent/memory.py`)

- `MemoryStore`: manages `memory/MEMORY.md` (long-term curated facts)
- `MemoryConsolidator`: LLM-driven consolidation that archives old messages into MEMORY.md when context exceeds half the window
- MEMORY.md content is injected into the system prompt via `ContextBuilder`
- Consolidation uses a forced `save_memory` tool call with fallback to raw archiving

### Vector memory (`nanobot/agent/vector_memory.py`)

- `VectorMemoryManager`: indexes `workspace/docs/<project>/` with hybrid BM25 + vector search
- SQLite + sqlite-vec + FTS5, embeddings via litellm
- Background sync loop (60 s interval), project-scoped search
- `VectorSearchTool`: exposes the `vector_search` tool to the agent

### Sessions (`nanobot/session/manager.py`)

- JSONL files in `workspace/sessions/` (one per `channel:chat_id`)
- Metadata line + append-only message dicts
- `get_history()` returns `messages[last_consolidated:]`
## What’s Missing (vs OpenClaw)

| Gap | Status |
|---|---|
| No dated memory files | ✅ Implemented (Phase 1) |
| No pre-compaction memory flush | ✅ Implemented (Phase 2) |
| Memory files not searchable | ✅ Implemented (Phase 3) |
| No `memory_search` tool | ✅ Implemented (Phase 4) |
| HISTORY.md redundant | ✅ Removed (replaced by dated memory files + `memory_search`) |
| No temporal decay | ✅ Implemented (separate config for memory vs docs) |
## What We’re Building

Four features, in order of implementation:

| # | Feature | Inspired By | Core Loop Impact |
|---|---|---|---|
| 1 | Session-end dated memory files | OpenClaw session-memory hook | 1 line in /new handler |
| 2 | Pre-compaction memory flush | OpenClaw memory-flush.ts | 1 call before consolidation |
| 3 | Memory file indexing | OpenClaw memory-core indexing | Extend VectorMemoryManager |
| 4 | memory_search tool | OpenClaw memory_search tool | 1 tool registration |
What stays unchanged:

- Current hybrid BM25 + vector search algorithm
- Current embedding pipeline (litellm)
- Current consolidation logic (MemoryConsolidator)
- Current session JSONL format
- MEMORY.md in system prompt
## Phase 1: Session-End Dated Memory Files

Goal: When `/new` is called, summarize the ending session into `memory/YYYY-MM-DD-slug.md`.

### New class: `SessionMemoryWriter`

File: `nanobot/agent/memory.py` (add to existing file)
Logic:

- Take the last `max_messages` user/assistant messages from the session snapshot (skip tool calls/results for the LLM summary input)
- If fewer than 3 user messages, skip (trivial session, not worth persisting)
- Call the LLM with a dedicated prompt:
  - System: “You are a memory writer. Generate a concise summary of this conversation.”
  - Include a forced `create_session_memory` tool call with parameters:
    - `slug`: short kebab-case filename slug (max 40 chars, e.g. “api-design-review”)
    - `summary`: markdown summary of key topics, decisions, and facts
- Write to `memory/YYYY-MM-DD-slug.md`
- If the LLM call fails, write a raw transcript excerpt as fallback (same pattern as `_raw_archive` in `MemoryStore`)
### Integration point in loop.py

Current `/new` handler (line 476):
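The original handler snippet is elided here. The key ordering constraint — summarize before the session is reset — can be sketched with stand-ins (`snapshot()`, `SessionMemoryWriter.write()`, and the handler shape are assumptions, not nanobot's real API):

```python
class FakeSession:
    """Stand-in for a nanobot session (assumed interface)."""
    def __init__(self, messages):
        self.messages = list(messages)

    def snapshot(self):
        return list(self.messages)

    def clear(self):
        self.messages = []


class RecordingWriter:
    """Stand-in for SessionMemoryWriter; records what it was asked to persist."""
    def __init__(self):
        self.written = []

    def write(self, snapshot):
        self.written.append(snapshot)


def handle_new(session, writer):
    # The one-line addition: persist a summary *before* the session is reset
    writer.write(session.snapshot())
    session.clear()
```

The point is that `write()` receives the snapshot taken before `clear()`, so the summary still sees the full conversation.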
### Config addition (schema.py)

Add a `SessionMemoryConfig` to `ToolsConfig`, alongside `vector_memory`:
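A minimal sketch of what the schema addition could look like. Shown with stdlib dataclasses; nanobot's schema.py may use a different base (e.g. pydantic), and the field names besides `enabled` are assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class SessionMemoryConfig:
    enabled: bool = True      # safe default: only writes files on /new
    max_messages: int = 30    # session tail size fed to the summarizer (assumed knob)


@dataclass
class ToolsConfig:
    """Simplified stand-in for nanobot's ToolsConfig."""
    session_memory: SessionMemoryConfig = field(default_factory=SessionMemoryConfig)
```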
### Files modified

- `nanobot/agent/memory.py` — add `SessionMemoryWriter` class (~80 lines)
- `nanobot/agent/loop.py` — add `self.session_memory_writer` init + 1 line in `/new` handler
- `nanobot/config/schema.py` — add `SessionMemoryConfig`
## Phase 2: Pre-Compaction Memory Flush

Goal: Before `maybe_consolidate_by_tokens` starts archiving messages, run a dedicated LLM turn that writes durable memories to `memory/YYYY-MM-DD.md`.

### New class: `MemoryFlusher`

File: `nanobot/agent/memory.py` (add to existing file)
Logic:

- Trigger condition: `estimated_tokens > context_window - threshold_tokens`
- Guard: skip if already flushed for this session key in the current compaction cycle (reset on session clear)
- Deduplicate: read existing `memory/YYYY-MM-DD.md` to avoid writing duplicate content
- Call the LLM with a dedicated system prompt
- If the LLM returns `[silent]`, skip the write
- Otherwise, append to `memory/YYYY-MM-DD.md` (create if it does not exist)
### Integration point

In `MemoryConsolidator.maybe_consolidate_by_tokens()` (memory.py line 302), add a flush call before the consolidation loop:
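The ordering is what matters: flush first, so durable facts are on disk before compaction discards the raw messages. A minimal stand-in illustrating just that ordering — the real `maybe_consolidate_by_tokens` signature in nanobot differs:

```python
class StubFlusher:
    """Stand-in for MemoryFlusher; counts flushes."""
    def __init__(self):
        self.flushed = 0

    def flush(self):
        self.flushed += 1


def maybe_consolidate_by_tokens(estimated, window, threshold, flusher, consolidate):
    """Sketch: flush durable memories, then compact, iff over the token budget."""
    if estimated <= window - threshold:
        return False
    flusher.flush()   # write durable memories first...
    consolidate()     # ...then archive/compact the transcript
    return True
```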
### Config addition (schema.py)

Add `MemoryFlushConfig` alongside the other tool configs.
### Files modified

- `nanobot/agent/memory.py` — add `MemoryFlusher` class (~70 lines), modify `MemoryConsolidator.__init__` to accept it
- `nanobot/agent/loop.py` — pass a `MemoryFlusher` instance to `MemoryConsolidator`
- `nanobot/config/schema.py` — add `MemoryFlushConfig`
## Phase 3: Memory File Indexing

Goal: Extend `VectorMemoryManager` to also index `memory/MEMORY.md` and `memory/*.md` files, making them searchable alongside project docs.

### Changes to VectorMemoryManager

File: `nanobot/agent/vector_memory.py`
1. Extend `_discover_files()` to include memory files
   - Current behavior: walks `docs/<project>/` directories only.
   - New behavior: also discovers `memory/MEMORY.md` and `memory/*.md`, using a reserved project name `_memory`.
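A sketch of the extended discovery, assuming the manager yields `(project, path)` pairs (the real `_discover_files()` shape in nanobot may differ):

```python
from pathlib import Path
from typing import Iterator, Tuple


def discover_files(docs_dir: Path, memory_dir: Path) -> Iterator[Tuple[str, Path]]:
    """Yield (project, path) pairs; memory files use the reserved '_memory' project."""
    # Existing behavior: one project per docs/<project>/ directory
    if docs_dir.exists():
        for project_dir in sorted(docs_dir.iterdir()):
            if project_dir.is_dir():
                for p in sorted(project_dir.rglob("*.md")):
                    yield project_dir.name, p
    # New behavior: memory/*.md (covers MEMORY.md and dated files alike)
    if memory_dir.exists():
        for p in sorted(memory_dir.glob("*.md")):
            yield "_memory", p
```

Because `_memory` is just another project name, everything downstream (chunking, embedding, BM25/vector fusion) works unchanged.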
2. Add `self.memory_dir` in `__init__`
3. Add a `search_memory()` convenience method
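The convenience method is a thin wrapper that scopes the existing search to the reserved project. Sketch with an assumed `search(query, project, limit)` signature (nanobot's actual signature may differ):

```python
class MemoryAwareManager:
    """Stand-in showing only the delegation; not the real VectorMemoryManager."""
    def __init__(self, search_impl):
        self._search_impl = search_impl

    def search(self, query, project=None, limit=5):
        return self._search_impl(query, project, limit)

    def search_memory(self, query, limit=5):
        # Reuses the existing hybrid BM25 + vector search, scoped to '_memory'
        return self.search(query, project="_memory", limit=limit)
```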
### No changes to search algorithm

The existing hybrid BM25 + vector search works identically — memory files are just another “project” in the same SQLite database.

### Files modified

- `nanobot/agent/vector_memory.py` — extend `_discover_files()`, add `memory_dir`, add `search_memory()`
## Phase 4: memory_search Tool

Goal: Give the agent a dedicated tool to search its own memory files.

### New file: `nanobot/agent/tools/memory_search.py`
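A hedged sketch of what the tool class could look like — the tool base class, registration API, and result schema are assumptions, not nanobot's actual interfaces:

```python
import json


class MemorySearchTool:
    """Sketch: a tool that searches only the reserved '_memory' project."""
    name = "memory_search"
    description = "Search the agent's own memory files (MEMORY.md + dated notes)."

    def __init__(self, search_fn):
        # e.g. VectorMemoryManager.search_memory (assumed callable)
        self._search = search_fn

    def run(self, query: str, limit: int = 5) -> str:
        hits = self._search(query, limit=limit)
        if not hits:
            return "No matching memories."
        return json.dumps(hits, ensure_ascii=False)
```

Registration in loop.py would then be a single call along the lines of `registry.register(MemorySearchTool(self.vector_memory.search_memory))` — again, names assumed.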
### Registration in loop.py

In `_register_default_tools()`, after the existing `vector_search` registration:
### System prompt update (context.py)

Update the workspace description in `_get_identity()` to mention `memory_search`.
### Files modified

- `nanobot/agent/tools/memory_search.py` — new file (~50 lines)
- `nanobot/agent/loop.py` — 2 lines (import + register)
- `nanobot/agent/context.py` — update system prompt text
## Architecture After Implementation

### Data flow: session lifecycle

- During a session: messages append to the session JSONL; near the context limit, `MemoryFlusher` writes durable notes to `memory/YYYY-MM-DD.md` before `MemoryConsolidator` compacts the transcript
- On `/new`: `SessionMemoryWriter` summarizes the ending session into `memory/YYYY-MM-DD-slug.md`
- Continuously: `VectorMemoryManager`’s background sync indexes `memory/*.md` under the reserved `_memory` project, searchable via the `memory_search` tool

## Implementation Order & Effort
| Phase | Feature | New Code | Modified Files | Effort |
|---|---|---|---|---|
| 1 | Session-end dated memory files | ~80 lines | memory.py, loop.py, schema.py | 2-3 hours |
| 2 | Pre-compaction memory flush | ~70 lines | memory.py, loop.py, schema.py | 2-3 hours |
| 3 | Memory file indexing | ~20 lines | vector_memory.py | 1 hour |
| 4 | memory_search tool | ~50 lines | new file + loop.py, context.py | 1-2 hours |
## Dependencies

- Phases 3 and 4 require vector memory to be enabled (`tools.vector_memory.enabled: true`)
- Phases 1 and 2 are independent of vector memory — they write plain markdown files
- Phase 4 depends on Phase 3 (memory files must be indexed to be searchable)
- Phases 1 and 2 can be implemented in parallel
## What We’re NOT Doing (and why)
| Feature | Reason to skip |
|---|---|
| Multiple embedding providers | We already have litellm which supports all providers |
| LanceDB plugin | Separate concern, memory-core approach is sufficient |
| QMD backend | Experimental in openclaw, unnecessary complexity |
| File watcher (chokidar) | Background sync loop already catches changes on next cycle |
| Embedding cache table | litellm handles caching; our hash-based skip logic is sufficient |
| Session JSONL indexing | Dated memory files already capture session content in better form |
| MMR re-ranking | Not enabled by default in openclaw either; keep simple |
| Temporal decay | Not enabled by default in openclaw; our recency is implicit via dated files |
| Plugin slot system | Nanobot doesn’t have plugins; direct integration is simpler |
| Auto-recall/auto-capture | Too invasive for “minimal core loop changes” goal |
## Config Reference (Final)
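The consolidated reference block was elided from this copy. A hedged sketch of the final surface — only `tools.vector_memory.enabled` is named elsewhere in this plan; the other section and key names are assumptions:

```yaml
tools:
  vector_memory:
    enabled: true      # required for Phases 3 and 4 (indexing + memory_search)
  session_memory:
    enabled: true      # Phase 1: write memory/YYYY-MM-DD-slug.md on /new
  memory_flush:
    enabled: true      # Phase 2: flush durable notes before compaction
```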
## Risk Mitigation

- LLM call failures: both `SessionMemoryWriter` and `MemoryFlusher` fall back to a raw text dump (same fail-or-raw-archive pattern as the existing `_raw_archive`)
- Duplicate writes: `MemoryFlusher` tracks flushed sessions per compaction cycle; `SessionMemoryWriter` generates unique slugs per session
- Disk space: dated memory files are small (1–5 KB each); at one session per day, that’s ~1.5 MB/year
- Index consistency: `VectorMemoryManager`’s periodic sync naturally picks up new memory files within 60 seconds
- Backward compatibility: all new features are opt-in via config; defaults match current behavior except `session_memory.enabled: true`, which is safe — it only writes files on `/new`
