BotBeat

Nare Labs
OPEN SOURCE · 2026-05-12

Nare Labs Open-Sources DSM: A Hierarchical Memory Engine for LLMs

Key Takeaways

  • DSM replaces dense attention bottlenecks with a hierarchical, graph-based associative memory architecture capable of handling millions of tokens
  • The system organizes knowledge into three layers (semantic segments, a dynamic category hierarchy, and an associative graph), enabling efficient multi-path routing and context selection
  • Benchmarks demonstrate that even small 1.5B-parameter models can achieve large-model reasoning capabilities when equipped with DSM's memory engine
Source: Hacker News (https://github.com/narelabs/dsm)

Summary

Nare Labs has released DSM (Dynamic Segmented Memory), an open-source memory engine designed to enable large language models to reason over datasets with millions of tokens. The technology replaces traditional dense attention mechanisms with a hierarchical, graph-based associative memory architecture organized into three interconnected layers: Segments (atomic text units with semantic embeddings), Hierarchy (a dynamic category tree for beam-search routing), and Graph (a semantic graph preserving associative links between segments).
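The three-layer organization described above can be sketched as plain data structures. The following is an illustrative assumption of how the layers might fit together, not Nare Labs' actual DSM API: the names `Segment`, `CategoryNode`, and `MemoryStore` are hypothetical stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """Atomic text unit with a semantic embedding (Segments layer)."""
    seg_id: int
    text: str
    embedding: list[float]

@dataclass
class CategoryNode:
    """Node in the dynamic category tree (Hierarchy layer)."""
    name: str
    centroid: list[float]  # summary embedding used for beam-search routing
    children: list["CategoryNode"] = field(default_factory=list)
    segment_ids: list[int] = field(default_factory=list)  # leaves point at segments

class MemoryStore:
    """Ties the three layers together: segments, hierarchy, associative graph."""

    def __init__(self, root: CategoryNode):
        self.segments: dict[int, Segment] = {}
        self.root = root                      # Hierarchy layer
        self.graph: dict[int, set[int]] = {}  # Graph layer: seg_id -> linked seg_ids

    def add_segment(self, seg: Segment, node: CategoryNode) -> None:
        """Register a segment and attach it to a category node."""
        self.segments[seg.seg_id] = seg
        node.segment_ids.append(seg.seg_id)
        self.graph.setdefault(seg.seg_id, set())

    def link(self, a: int, b: int) -> None:
        """Record an associative link between two segments (undirected here)."""
        self.graph[a].add(b)
        self.graph[b].add(a)
```

Treating associative links as undirected is a simplifying assumption; a real implementation could just as plausibly use typed or weighted edges.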

The architecture addresses key limitations of standard Retrieval-Augmented Generation (RAG) systems, which often treat context as flat lists of chunks, leading to inefficiencies and context window constraints. DSM's routing mechanism uses an ensemble approach combining beam search, k-NN vector retrieval, and graph traversal to dynamically construct the most relevant context for any query, regardless of the underlying model size.
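A toy version of that ensemble can be written with cosine similarity over small embedding vectors: run flat k-NN and a beam search over the category tree, propagate scores one hop along the associative graph, then merge. The function names and the tree/graph representation below are hypothetical sketches, not DSM's real interface.

```python
import heapq
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn(query, embeddings, k=2):
    """Flat k-NN over segment embeddings: seg_id -> similarity."""
    sims = {s: cosine(query, e) for s, e in embeddings.items()}
    return dict(heapq.nlargest(k, sims.items(), key=lambda kv: kv[1]))

def beam_search(query, tree, beam_width=2):
    """Descend the category tree, expanding only the best-scoring nodes.

    `tree` maps a node name to {"centroid", "children", "segments"}.
    """
    frontier = ["root"]
    leaf_scores = {}
    while frontier:
        scored_internal = []
        for name in frontier:
            node = tree[name]
            score = cosine(query, node["centroid"])
            if node["children"]:
                scored_internal.append((score, name))
            else:  # leaf: its segments inherit the node's routing score
                for seg_id in node["segments"]:
                    leaf_scores[seg_id] = max(leaf_scores.get(seg_id, 0.0), score)
        scored_internal.sort(reverse=True)
        frontier = [c for _, n in scored_internal[:beam_width]
                    for c in tree[n]["children"]]
    return leaf_scores

def expand(seed_scores, graph, decay=0.5):
    """Follow associative links one hop, propagating a decayed score."""
    out = dict(seed_scores)
    for seg_id, score in seed_scores.items():
        for nb in graph.get(seg_id, ()):
            out[nb] = max(out.get(nb, 0.0), score * decay)
    return out

def route(query, embeddings, tree, graph, k=2, top=3):
    """Ensemble routing: merge k-NN, beam-search, and graph-propagated scores."""
    scores = {}
    for partial in (knn(query, embeddings, k), beam_search(query, tree)):
        for seg_id, s in expand(partial, graph).items():
            scores[seg_id] = scores.get(seg_id, 0.0) + s
    return heapq.nlargest(top, scores, key=scores.get)
```

Summing the expanded partial scores is only one plausible fusion rule; reciprocal-rank fusion or learned weights would fit the same skeleton.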

Nare Labs has benchmarked DSM on consumer-grade hardware using the Qwen-2.5-1.5B model, reporting that even small language models can approach large-model reasoning performance when equipped with the memory engine. The project is released under the MIT License, with a public GitHub repository that includes Python APIs and quick-start examples for immediate developer adoption.

  • MIT-licensed open-source release with Python APIs makes the technology immediately accessible to developers building the next generation of AI systems

Editorial Opinion

DSM represents a significant shift in how language models can extend reasoning beyond context window limitations. By decoupling memory architecture from model parameters, Nare Labs has created infrastructure that could democratize reasoning capabilities across models of all sizes—particularly valuable for edge deployment and resource-constrained environments. This approach could enable smaller, more efficient models to compete with billion-parameter systems on tasks requiring deep reasoning. The open-source release positions DSM as foundational infrastructure for the emerging autonomous AI engineering ecosystem.

Large Language Models (LLMs) · Machine Learning · MLOps & Infrastructure · Open Source

© 2026 BotBeat