Memory that persists across sessions
Give your LLM agents a structured, searchable memory layer.
File-based, version-controlled, and provider-agnostic.
npm install @memel06/mnemonio

Everything your agent needs to remember
A complete memory toolkit: store, search, extract, and consolidate knowledge across conversations.
Persistent Memory
Markdown files with YAML frontmatter. Human-readable, git-friendly, and portable across tools and providers.
Semantic Search
Find relevant memories by meaning, not just keywords. LLM-powered ranking returns what actually matters.
MCP Server
Drop-in integration for any MCP-compatible client. Save, search, delete, and manage memories — your AI assistant gets 8 tools automatically.
Auto-Extraction
Automatically extract durable facts from conversations. Preferences, decisions, and corrections captured without manual work.
Memory Distillation
Consolidate over time: merge duplicates, prune stale entries, tighten wording. Your memory stays clean.
Team Memory
Shared read-only directory for coding standards, onboarding notes, and conventions. Commit it to your repo.
Your agents keep forgetting. Fix that.
LLM agents lose context every session. Mnemonio gives them durable knowledge that compounds over time.
Files, not databases
Plain markdown you can read, edit, and version-control. No vendor lock-in, no opaque storage layer.
Any LLM provider
One callback interface works with OpenAI, Anthropic, OpenRouter, or any compatible endpoint. Swap providers without changing memory code.
Self-maintaining
Distillation automatically merges duplicates, prunes stale entries, and tightens wording. Memory quality improves over time rather than degrading.
Three interfaces, one system
MCP server for AI assistants, CLI for humans, TypeScript library for custom agents. Same memory, everywhere.
Three steps to persistent agent memory
Mnemonio stores memories as structured markdown, indexes them in a manifest, and integrates via a simple LLM callback.
Store as Markdown
Each memory is a .md file with YAML frontmatter: name, type, tags, and expiry. Human-readable and git-friendly.
```yaml
---
name: testing-approach
type: directive
tags: [testing, database]
---
```
Index in Manifest
MANIFEST.md holds one-line pointers to every memory file. Auto-truncated when injected into prompts to respect token budgets.
```markdown
- testing-approach.md
- user-preferences.md
- project-context.md
```
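The auto-truncation described above can be pictured with a rough characters-per-token heuristic. This is an illustrative sketch only, with a hypothetical `truncateManifest` helper and an assumed ~4 characters per token; Mnemonio's actual truncation logic may differ:

```typescript
// Hypothetical sketch, not Mnemonio's real code: keep manifest lines
// until a rough token budget is exhausted, then note what was omitted.
function truncateManifest(lines: string[], maxTokens: number): string {
  const maxChars = maxTokens * 4; // ~4 chars per token (heuristic)
  const kept: string[] = [];
  let used = 0;
  for (const line of lines) {
    if (used + line.length + 1 > maxChars) {
      kept.push(`… (${lines.length - kept.length} more entries omitted)`);
      break;
    }
    kept.push(line);
    used += line.length + 1; // +1 for the newline
  }
  return kept.join('\n');
}
```

With a generous budget the manifest passes through untouched; with a tight one, only a prefix plus an omission marker reaches the prompt.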
Connect Your LLM
Provide a callback function or use resolveLlm() with env vars. Mnemonio handles search, extraction, and distillation.
```typescript
const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  llm: resolveLlm(),
});
```
Up and running in minutes
MCP server, CLI, or TypeScript library. Pick the interface that fits your workflow.
```jsonc
// Add to your MCP client settings (e.g. claude_desktop_config.json)
{
  "mcpServers": {
    "mnemonio": {
      "command": "npx",
      "args": ["-p", "@memel06/mnemonio", "mnemonio-mcp"],
      "env": {
        "MNEMONIO_DIR": "./.mnemonio",
        "MNEMONIO_BASE_URL": "https://your-llm-provider.com",
        "MNEMONIO_MODEL": "your-model"
      }
    }
  }
}
```

Or use a .env file in your project root:

```shell
MNEMONIO_API_KEY=your-api-key
MNEMONIO_BASE_URL=https://your-llm-provider.com
MNEMONIO_MODEL=your-model
```
```typescript
import { createMnemonioStore, resolveLlm } from '@memel06/mnemonio';

// Quick start with env-based LLM config
const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  llm: resolveLlm(),
});

// Initialize memory directory
await store.ensureDir();

// Inject memory into your system prompt
const memoryPrompt = await store.buildPrompt();

// Semantic search
const results = await store.findRelevant('database testing approach');

// Extract memories from a conversation
await store.extract({
  messages: [
    { role: 'user', content: "Don't mock the DB in integration tests." },
    { role: 'assistant', content: 'Understood, using real database.' },
  ],
});

// Consolidate: merge duplicates, prune stale entries
await store.distill({ force: true });
```
```shell
# Install globally
$ npm install -g @memel06/mnemonio

# Initialize a memory directory
$ mnemonio init .mnemonio

# List all memories with descriptions and age
$ mnemonio list .mnemonio --type directive

# Semantic search (requires MNEMONIO_API_KEY)
$ mnemonio search "testing approach" .mnemonio

# Run consolidation pass
$ mnemonio distill .mnemonio --force

# Show stats
$ mnemonio stats .mnemonio

# Include team memory in search
$ mnemonio search "coding standards" .mnemonio --team-dir ./team-memory

# Prune stale memories
$ mnemonio prune .mnemonio --max-age 90 --dry-run
```
Built for every agent workflow
From coding assistants to autonomous agents, Mnemonio fits wherever LLMs need persistent context.
AI Coding Assistants
Your copilot remembers project conventions, testing preferences, and past decisions. No more repeating yourself.
Autonomous Agents
Long-running agents that accumulate knowledge across tasks. Extract facts, distill periodically, stay focused.
Dev Tools & Pipelines
Embed memory in your toolchain. CLI for scripts, library for custom integrations, MCP for plug-and-play.
Team Onboarding
New team members' agents instantly know your coding standards, architecture decisions, and project history.
Multi-Session Workflows
Debugging across sessions? Mnemonio preserves the investigation context so you pick up where you left off.
Knowledge Management
Structure institutional knowledge as typed, searchable memories. Identity, directives, context, and bookmarks.
Shared context for the whole team
A read-only memory directory that gives every developer's agent the same baseline knowledge.
Commit to your repo
Team memory lives in version control. Every clone gets the same conventions, standards, and context.
Read-only by design
Agents can read team memory but never modify it. Write operations only touch the private directory.
Path traversal protection
Symlinks, ../ segments, and null bytes are blocked. validateTeamWritePath keeps the boundary safe.
Unified search
Semantic search spans both private and team memories. One query, combined results, ranked by relevance.
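The path-traversal protection described above can be sketched in a few lines. This is a hypothetical illustration in the spirit of validateTeamWritePath, not Mnemonio's actual implementation (the real guard also resolves symlinks, which this sketch does not; the function name `isInsideRoot` is invented):

```typescript
import * as path from 'node:path';

// Hypothetical sketch of a traversal guard: reject null bytes, resolve
// the candidate against the root, and require the result to stay inside it.
function isInsideRoot(root: string, candidate: string): boolean {
  if (candidate.includes('\0')) return false;        // null bytes rejected outright
  const absRoot = path.resolve(root);
  const resolved = path.resolve(absRoot, candidate); // collapses ../ segments
  const rel = path.relative(absRoot, resolved);
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}
```

The key design point is comparing resolved paths rather than inspecting the raw string, so `a/../../secret` is caught the same way as a plain `../secret`.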
```jsonc
// MCP config with team memory
{
  "mcpServers": {
    "mnemonio": {
      "command": "npx",
      "args": ["-p", "@memel06/mnemonio", "mnemonio-mcp"],
      "env": {
        "MNEMONIO_DIR": "./.mnemonio",
        "MNEMONIO_TEAM_DIR": "./team-memory"
      }
    }
  }
}
```
```typescript
const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  teamDir: './team-memory',
});

// Combined prompt includes both sources
const prompt = await store.buildCombinedPrompt();

// Validate paths before team writes
const safe = await store.validateTeamWritePath('notes.md');
const isTeam = store.isTeamPath(somePath);
```
```shell
# Include team memory in any read command
$ mnemonio list .mnemonio --team-dir ./team-memory
$ mnemonio search "coding standards" .mnemonio --team-dir ./team-memory
$ mnemonio stats .mnemonio --team-dir ./team-memory
```
TypeScript API
Full control over memory operations. Every method is typed, documented, and composable.
No LLM Required

- ensureDir() — Create memory dir + MANIFEST.md
- scan() — List all memory files with metadata
- readEntrypoint() — Read MANIFEST.md with truncation
- buildPrompt() — Build memory context for prompts
- buildCombinedPrompt() — Include team memory
- stats() — File count, size, type breakdown
- formatManifest() — Human-readable manifest
LLM Required

- findRelevant(query) — Semantic search
- extract(config) — Extract facts from conversations
- distill(config?) — Consolidate and clean up
Lock Management

- readLastDistilledAt() — Last distillation timestamp
- tryAcquireLock() — Acquire distillation lock
- rollbackLock() — Restore lock on failure
Team Security

- validateTeamWritePath() — Safe path resolution
- isTeamPath() — Check team directory membership
LLM Callback Interface
```typescript
type LlmCallback = (params: {
  system: string;
  messages: ReadonlyArray<{ role: 'user' | 'assistant'; content: string }>;
  maxTokens: number;
}) => Promise<string>;

// Works with any provider - just wire up your client
const llm: LlmCallback = async ({ system, messages, maxTokens }) => {
  const res = await yourClient.chat({ model: 'your-model', max_tokens: maxTokens, system, messages });
  return res.text;
};
```
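Because the callback is just a function, it composes. As one example, a retry wrapper (a sketch, not part of Mnemonio; `withRetry` is an invented helper, and the LlmCallback type is repeated here only so the snippet is self-contained) can wrap any provider client before handing it to the store:

```typescript
// Same shape as the LlmCallback interface above, repeated for self-containment.
type LlmCallback = (params: {
  system: string;
  messages: ReadonlyArray<{ role: 'user' | 'assistant'; content: string }>;
  maxTokens: number;
}) => Promise<string>;

// Hypothetical helper: retry a flaky provider a few times before giving up.
function withRetry(inner: LlmCallback, attempts = 3): LlmCallback {
  return async (params) => {
    let lastErr: unknown;
    for (let i = 0; i < attempts; i++) {
      try {
        return await inner(params);
      } catch (err) {
        lastErr = err; // remember the failure and try again
      }
    }
    throw lastErr;
  };
}
```

The same pattern works for rate limiting, logging, or swapping providers at runtime, since Mnemonio only ever sees the outer function.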
Structured, typed, human-readable
Every memory is a markdown file with YAML frontmatter. Four types cover the full spectrum of agent knowledge.
Identity
User role, goals, preferences, expertise. Tailors agent behavior to the person.
Directive
Corrections and confirmed approaches. Behavioral guidance the agent should follow.
Context
Ongoing work, decisions, timelines, incidents. The evolving state of the project.
Bookmark
Pointers to external systems, docs, dashboards. Quick references for the agent.
```markdown
---
name: testing-approach
description: Team prefers integration tests with real DB
type: directive
tags: [testing, database]
expires: 2026-12-31
---
Always use the real database for integration tests.

> reason: A prior incident where mocked tests
> passed but production broke on a schema change.
> scope: Use test containers or a dedicated test
> database. Never mock the DB layer in integration
> suites.
```
Frontmatter fields:

- name, description, type — all optional. The body is freeform markdown.
- tags — freeform labels; use them to pre-filter memory_list and memory_search before the LLM step.
- expires — ISO date; memories past this date are automatically excluded from all operations.
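The expiry rule above amounts to a simple date comparison. A minimal sketch, assuming a memory's frontmatter is parsed into an object (the `MemoryMeta` shape and `isActive` helper are illustrative, not Mnemonio's API; the real filter lives inside the library):

```typescript
// Illustrative sketch of the expiry rule; field names mirror the
// frontmatter shown above.
interface MemoryMeta {
  name: string;
  expires?: string; // ISO date, e.g. '2026-12-31'
}

// A memory with no expires field is always active; otherwise it is
// active up to and including the expiry date.
function isActive(memory: MemoryMeta, now: Date = new Date()): boolean {
  if (!memory.expires) return true;
  return now.getTime() <= new Date(memory.expires).getTime();
}
```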
Command-line interface
Manage memories from your terminal. Every command supports JSON output for scripting.
All commands default to the current directory. Add --team-dir ./team-memory to include shared memory. Add --json for machine-readable output.
Works with any LLM provider
Auto-detects from the base URL, or set MNEMONIO_PROVIDER explicitly.
| Provider | Detection | Auth | Set via |
|---|---|---|---|
| OpenAI | URL contains openai.com | Bearer token | MNEMONIO_API_KEY |
| Anthropic | URL contains anthropic.com | x-api-key header | MNEMONIO_API_KEY |
| OpenRouter | URL contains openrouter.ai | Bearer token | MNEMONIO_API_KEY |
| Any compatible | Fallback for all other URLs | Bearer token | MNEMONIO_API_KEY |
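The URL-based auto-detection in the table comes down to a hostname check with a compatible fallback. A sketch of the idea (the `detectProvider` name and return values are illustrative, not Mnemonio's actual API):

```typescript
// Illustrative sketch of URL-based provider detection, mirroring the
// table above; not Mnemonio's real code.
function detectProvider(
  baseUrl: string,
): 'openai' | 'anthropic' | 'openrouter' | 'compatible' {
  const host = new URL(baseUrl).hostname;
  if (host.includes('openai.com')) return 'openai';
  if (host.includes('anthropic.com')) return 'anthropic';
  if (host.includes('openrouter.ai')) return 'openrouter';
  return 'compatible'; // fallback: any OpenAI-compatible endpoint, Bearer auth
}
```

Setting MNEMONIO_PROVIDER explicitly bypasses this detection, which matters when a compatible endpoint is proxied under a custom domain.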