Give your LLM agents a structured, searchable memory layer.
File-based, version-controlled, and provider-agnostic.
`npm install @memel06/mnemonio`

A complete memory toolkit: store, search, extract, and consolidate knowledge across conversations.
Markdown files with YAML frontmatter. Human-readable, git-friendly, and portable across tools and providers.
Find relevant memories by meaning, not just keywords. LLM-powered ranking returns what actually matters.
Drop-in integration for any MCP-compatible client. Your AI assistant gets memory without any code changes.
Automatically extract durable facts from conversations. Preferences, decisions, and corrections captured without manual work.
Consolidate over time: merge duplicates, prune stale entries, tighten wording. Your memory stays clean.
Shared read-only directory for coding standards, onboarding notes, and conventions. Commit it to your repo.
LLM agents lose context every session. Mnemonio gives them durable knowledge that compounds over time.
Plain markdown you can read, edit, and version-control. No vendor lock-in, no opaque storage layer.
One callback interface works with OpenAI, Anthropic, OpenRouter, or any compatible endpoint. Swap providers without changing memory code.
Distillation automatically merges duplicates, prunes stale entries, and tightens wording. Memory quality improves over time instead of degrading.
MCP server for AI assistants, CLI for humans, TypeScript library for custom agents. Same memory, everywhere.
Mnemonio stores memories as structured markdown, indexes them in a manifest, and integrates via a simple LLM callback.
Each memory is a .md file with YAML frontmatter: name, type, tags, and expiry. Human-readable and git-friendly.
MANIFEST.md holds one-line pointers to every memory file. Auto-truncated when injected into prompts to respect token budgets.
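One way to picture the token-budget truncation: keep whole manifest lines until a rough character budget is spent. This is an illustrative sketch, not the library's actual implementation; `truncateManifest` and the ~4-characters-per-token heuristic are assumptions here.

```ts
// Illustrative only: trim a manifest to an approximate token budget,
// keeping whole lines and marking the cut point.
function truncateManifest(manifest: string, maxTokens: number): string {
  const budgetChars = maxTokens * 4; // rough heuristic: ~4 chars per token
  const kept: string[] = [];
  let used = 0;
  for (const line of manifest.split("\n")) {
    if (used + line.length + 1 > budgetChars) {
      kept.push("… (manifest truncated)");
      break;
    }
    kept.push(line);
    used += line.length + 1;
  }
  return kept.join("\n");
}
```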
Provide a callback function or use resolveLlm() with env vars. Mnemonio handles search, extraction, and distillation.
MCP server, CLI, or TypeScript library. Pick the interface that fits your workflow.
```jsonc
// Add to your MCP client settings (e.g. claude_desktop_config.json)
{
  "mcpServers": {
    "mnemonio": {
      "command": "npx",
      "args": ["-p", "@memel06/mnemonio", "mnemonio-mcp"],
      "env": {
        "MNEMONIO_DIR": "./.mnemonio",
        "MNEMONIO_BASE_URL": "https://your-llm-provider.com",
        "MNEMONIO_MODEL": "your-model"
      }
    }
  }
}

// Or use a .env file in your project root:
// MNEMONIO_API_KEY=your-api-key
// MNEMONIO_BASE_URL=https://your-llm-provider.com
// MNEMONIO_MODEL=your-model
```
```ts
import { createMnemonioStore, resolveLlm } from '@memel06/mnemonio';

// Quick start with env-based LLM config
const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  llm: resolveLlm(),
});

// Initialize memory directory
await store.ensureDir();

// Inject memory into your system prompt
const memoryPrompt = await store.buildPrompt();

// Semantic search
const results = await store.findRelevant('database testing approach');

// Extract memories from a conversation
await store.extract({
  messages: [
    { role: 'user', content: "Don't mock the DB in integration tests." },
    { role: 'assistant', content: 'Understood, using real database.' },
  ],
});

// Consolidate: merge duplicates, prune stale entries
await store.distill({ force: true });
```
```sh
# Install globally
$ npm install -g @memel06/mnemonio

# Initialize a memory directory
$ mnemonio init .mnemonio

# List all memories with descriptions and age
$ mnemonio list .mnemonio --type directive

# Semantic search (requires MNEMONIO_API_KEY)
$ mnemonio search "testing approach" .mnemonio

# Run consolidation pass
$ mnemonio distill .mnemonio --force

# Show stats
$ mnemonio stats .mnemonio

# Include team memory in search
$ mnemonio search "coding standards" .mnemonio --team-dir ./team-memory

# Prune stale memories
$ mnemonio prune .mnemonio --max-age 90 --dry-run
```
From coding assistants to autonomous agents, Mnemonio fits wherever LLMs need persistent context.
Your copilot remembers project conventions, testing preferences, and past decisions. No more repeating yourself.
Long-running agents that accumulate knowledge across tasks. Extract facts, distill periodically, stay focused.
Embed memory in your toolchain. CLI for scripts, library for custom integrations, MCP for plug-and-play.
New team members' agents instantly know your coding standards, architecture decisions, and project history.
Debugging across sessions? Mnemonio preserves the investigation context so you pick up where you left off.
Structure institutional knowledge as typed, searchable memories. Identity, directives, context, and bookmarks.
A read-only memory directory that gives every developer's agent the same baseline knowledge.
Team memory lives in version control. Every clone gets the same conventions, standards, and context.
Agents can read team memory but never modify it. Write operations only touch the private directory.
Symlinks, `../` segments, and null bytes are blocked; `validateTeamWritePath` keeps the boundary safe.
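To illustrate the kind of boundary check involved, here is a minimal sketch of path validation. The function `isSafeTeamWrite` is hypothetical, not the library's `validateTeamWritePath`; a real implementation would also resolve symlinks (e.g. via `fs.realpath`) before comparing.

```ts
import * as path from "node:path";

// Illustrative sketch: reject null bytes, then verify the resolved
// target stays strictly inside the team root (catches ../ escapes).
function isSafeTeamWrite(teamDir: string, candidate: string): boolean {
  if (candidate.includes("\0")) return false; // null bytes
  const root = path.resolve(teamDir);
  const target = path.resolve(root, candidate);
  // A valid target must live under the team root, not at or above it.
  return target.startsWith(root + path.sep);
}
```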
Semantic search spans both private and team memories. One query, combined results, ranked by relevance.
```jsonc
// MCP config with team memory
{
  "mcpServers": {
    "mnemonio": {
      "command": "npx",
      "args": ["-p", "@memel06/mnemonio", "mnemonio-mcp"],
      "env": {
        "MNEMONIO_DIR": "./.mnemonio",
        "MNEMONIO_TEAM_DIR": "./team-memory"
      }
    }
  }
}
```
```ts
const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  teamDir: './team-memory',
});

// Combined prompt includes both sources
const prompt = await store.buildCombinedPrompt();

// Validate paths before team writes
const safe = await store.validateTeamWritePath('notes.md');
const isTeam = store.isTeamPath(somePath);
```
```sh
# Include team memory in any read command
$ mnemonio list .mnemonio --team-dir ./team-memory
$ mnemonio search "coding standards" .mnemonio --team-dir ./team-memory
$ mnemonio stats .mnemonio --team-dir ./team-memory
```
Full control over memory operations. Every method is typed, documented, and composable.
- `ensureDir()` — Create memory dir + MANIFEST.md
- `scan()` — List all memory files with metadata
- `readEntrypoint()` — Read MANIFEST.md with truncation
- `buildPrompt()` — Build memory context for prompts
- `buildCombinedPrompt()` — Include team memory
- `stats()` — File count, size, type breakdown
- `formatManifest()` — Human-readable manifest
- `findRelevant(query)` — Semantic search
- `extract(config)` — Extract facts from conversations
- `distill(config?)` — Consolidate and clean up
- `readLastDistilledAt()` — Last distillation timestamp
- `tryAcquireLock()` — Acquire distillation lock
- `rollbackLock()` — Restore lock on failure
- `validateTeamWritePath()` — Safe path resolution
- `isTeamPath()` — Check team directory membership

```ts
type LlmCallback = (params: {
  system: string;
  messages: ReadonlyArray<{ role: 'user' | 'assistant'; content: string }>;
  maxTokens: number;
}) => Promise<string>;

// Works with any provider - just wire up your client
const llm: LlmCallback = async ({ system, messages, maxTokens }) => {
  const res = await yourClient.chat({
    model: 'your-model',
    max_tokens: maxTokens,
    system,
    messages,
  });
  return res.text;
};
```
Every memory is a markdown file with YAML frontmatter. Four types cover the full spectrum of agent knowledge.
User role, goals, preferences, expertise. Tailors agent behavior to the person.
Corrections and confirmed approaches. Behavioral guidance the agent should follow.
Ongoing work, decisions, timelines, incidents. The evolving state of the project.
Pointers to external systems, docs, dashboards. Quick references for the agent.
```md
---
name: testing-approach
description: Team prefers integration tests with real DB
type: directive
tags: [testing, database]
expires: 2026-12-31
---

Always use the real database for integration tests.

> reason: A prior incident where mocked tests
> passed but production broke on a schema change.

> scope: Use test containers or a dedicated test
> database. Never mock the DB layer in integration
> suites.
```
Frontmatter fields:
`name`, `description`, `type`, `tags`, `expires` — all optional. The body is freeform markdown.
Manage memories from your terminal. Every command supports JSON output for scripting.
All commands default to the current directory. Add --team-dir ./team-memory to include shared memory. Add --json for machine-readable output.
Auto-detects from the base URL, or set MNEMONIO_PROVIDER explicitly.
| Provider | Detection | Auth | Set via |
|---|---|---|---|
| OpenAI | URL contains openai.com | Bearer token | MNEMONIO_API_KEY |
| Anthropic | URL contains anthropic.com | x-api-key header | MNEMONIO_API_KEY |
| OpenRouter | URL contains openrouter.ai | Bearer token | MNEMONIO_API_KEY |
| Any compatible | Fallback for all other URLs | Bearer token | MNEMONIO_API_KEY |
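The detection rules in the table above amount to a simple hostname check. This sketch is illustrative only; `detectProvider` and its return values are assumptions, not the library's actual code.

```ts
type Provider = "openai" | "anthropic" | "openrouter" | "compatible";

// Illustrative: pick a provider from the base URL's hostname,
// falling back to the generic compatible mode for anything else.
function detectProvider(baseUrl: string): Provider {
  const host = new URL(baseUrl).hostname;
  if (host.endsWith("openai.com")) return "openai";
  if (host.endsWith("anthropic.com")) return "anthropic";
  if (host.endsWith("openrouter.ai")) return "openrouter";
  return "compatible";
}
```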