# Roadmap
Future directions for the cognitive memory model. Contributions welcome.
## Planned
- Auto-scaling FAISS nlist -- Currently nlist=100 (suitable up to ~100K memories). For larger stores, nlist should grow with N (e.g., nlist proportional to sqrt(N)) to maintain true O(1) retrieval at any scale.
- Persistent entity index rebuilding -- When loading from disk, re-extract entities if the spaCy model has been updated.
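The sqrt(N) scaling rule could be sketched as follows. This is a minimal illustration, not code from the project; the helper name and the anchor point (nlist=100 at 100K memories, matching the current default) are assumptions:

```python
import math

def scaled_nlist(n_memories: int, base_nlist: int = 100, base_n: int = 100_000) -> int:
    """Hypothetical helper: grow nlist proportionally to sqrt(N),
    anchored so that N = 100K memories keeps the current nlist = 100."""
    if n_memories <= base_n:
        return base_nlist  # current default is fine at small scale
    return int(base_nlist * math.sqrt(n_memories / base_n))

# e.g. 400K memories -> nlist = 200; 1M memories -> nlist = 316
```

The returned value would be passed to the FAISS IVF index constructor when (re)building the index; an existing index would need to be retrained when nlist changes.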
## Under Consideration
- Multi-modal memories -- Store and retrieve memories from images, audio, and structured data alongside text. Would require multi-modal embeddings.
- Continual learning integration -- Use memory access patterns and consolidated semantic memories to inform model fine-tuning. Frequently recalled information could become candidates for weight updates.
- Hardware acceleration -- Replace FAISS with TCAM (Ternary Content-Addressable Memory), neuromorphic chips, or in-memory compute (memristor crossbars) for true O(1) parallel matching. The memory semantics are hardware-independent; FAISS is a software bridge.
- Streaming ingestion -- Real-time memory encoding from live conversation streams (WebSocket, SSE) rather than turn-by-turn API calls.
- Memory visualization dashboard -- Web UI for browsing, searching, and managing memories. Show the entity graph, activation chains, and decay curves.
- Cross-language support -- Multilingual embedding models for non-English conversations.
## Contributing
See docs/DEVELOPMENT_PLAN.md for the full implementation history and architectural decisions, and the benchmarks document for evaluation methodology.