v0.1.5 — Now with team memory

Memory that persists
across sessions

Give your LLM agents a structured, searchable memory layer.
File-based, version-controlled, and provider-agnostic.

npm install @memel06/mnemonio

Everything your agent needs to remember

A complete memory toolkit: store, search, extract, and consolidate knowledge across conversations.

Persistent Memory

Markdown files with YAML frontmatter. Human-readable, git-friendly, and portable across tools and providers.
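The on-disk shape can be pictured as a tiny serializer. This is an illustrative sketch, not the library's actual writer; the `MemoryEntry` interface and `renderMemory` helper are assumptions, while the frontmatter fields mirror the format shown later on this page.

```typescript
// Sketch of the file format described above (hypothetical helper, not the
// library's serializer): a memory rendered as markdown with YAML frontmatter.
interface MemoryEntry {
  name: string;
  type: 'identity' | 'directive' | 'context' | 'bookmark';
  tags: string[];
  body: string;
}

function renderMemory(entry: MemoryEntry): string {
  return [
    '---',
    `name: ${entry.name}`,
    `type: ${entry.type}`,
    `tags: [${entry.tags.join(', ')}]`,
    '---',
    '',
    entry.body,
  ].join('\n');
}
```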

Semantic Search

Find relevant memories by meaning, not just keywords. LLM-powered ranking returns what actually matters.

MCP Server

Drop-in integration for any MCP-compatible client. Your AI assistant gets memory without any code changes.

Auto-Extraction

Automatically extract durable facts from conversations. Preferences, decisions, and corrections captured without manual work.

Memory Distillation

Consolidate over time: merge duplicates, prune stale entries, tighten wording. Your memory stays clean.

Team Memory

Shared read-only directory for coding standards, onboarding notes, and conventions. Commit it to your repo.

Your agents keep forgetting. Fix that.

LLM agents lose context every session. Mnemonio gives them durable knowledge that compounds over time.

📂

Files, not databases

Plain markdown you can read, edit, and version-control. No vendor lock-in, no opaque storage layer.

🔌

Any LLM provider

One callback interface works with OpenAI, Anthropic, OpenRouter, or any compatible endpoint. Swap providers without changing memory code.

🧹

Self-maintaining

Distillation automatically merges duplicates, prunes stale entries, and tightens wording. Memory quality improves over time instead of degrading.

🔧

Three interfaces, one system

MCP server for AI assistants, CLI for humans, TypeScript library for custom agents. Same memory, everywhere.

Three steps to persistent agent memory

Mnemonio stores memories as structured markdown, indexes them in a manifest, and integrates via a simple LLM callback.

1. Store as Markdown

Each memory is a .md file with YAML frontmatter: name, type, tags, and expiry. Human-readable and git-friendly.

---
name: testing-approach
type: directive
tags: [testing, database]
---

2. Index in Manifest

MANIFEST.md holds one-line pointers to every memory file. Auto-truncated when injected into prompts to respect token budgets.

# Memory Manifest
- testing-approach.md
- user-preferences.md
- project-context.md
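The truncation step can be sketched as follows. This is a hedged illustration under the assumption of a simple character budget; the library's real logic may count tokens instead, and `truncateManifest` is a hypothetical name.

```typescript
// Hedged sketch of manifest truncation (assumed character budget; the
// actual implementation may be token-aware): keep manifest lines until
// the budget is exhausted, then note how many entries were omitted.
function truncateManifest(lines: string[], maxChars: number): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const line of lines) {
    if (used + line.length > maxChars) {
      kept.push(`… (${lines.length - kept.length} more entries omitted)`);
      break;
    }
    kept.push(line);
    used += line.length;
  }
  return kept;
}
```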

3. Connect Your LLM

Provide a callback function or use resolveLlm() with env vars. Mnemonio handles search, extraction, and distillation.

const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  llm: resolveLlm(),
});

Up and running in minutes

MCP server, CLI, or TypeScript library. Pick the interface that fits your workflow.

// Add to your MCP client settings (e.g. claude_desktop_config.json)
{
  "mcpServers": {
    "mnemonio": {
      "command": "npx",
      "args": ["-p", "@memel06/mnemonio", "mnemonio-mcp"],
      "env": {
        "MNEMONIO_DIR": "./.mnemonio",
        "MNEMONIO_BASE_URL": "https://your-llm-provider.com",
        "MNEMONIO_MODEL": "your-model"
      }
    }
  }
}

// Or use a .env file in your project root:
// MNEMONIO_API_KEY=your-api-key
// MNEMONIO_BASE_URL=https://your-llm-provider.com
// MNEMONIO_MODEL=your-model
import { createMnemonioStore, resolveLlm } from '@memel06/mnemonio';

// Quick start with env-based LLM config
const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  llm: resolveLlm(),
});

// Initialize memory directory
await store.ensureDir();

// Inject memory into your system prompt
const memoryPrompt = await store.buildPrompt();

// Semantic search
const results = await store.findRelevant('database testing approach');

// Extract memories from a conversation
await store.extract({
  messages: [
    { role: 'user', content: "Don't mock the DB in integration tests." },
    { role: 'assistant', content: 'Understood, using real database.' },
  ],
});

// Consolidate: merge duplicates, prune stale entries
await store.distill({ force: true });

# Install globally
$ npm install -g @memel06/mnemonio

# Initialize a memory directory
$ mnemonio init .mnemonio

# List all memories with descriptions and age
$ mnemonio list .mnemonio --type directive

# Semantic search (requires MNEMONIO_API_KEY)
$ mnemonio search "testing approach" .mnemonio

# Run consolidation pass
$ mnemonio distill .mnemonio --force

# Show stats
$ mnemonio stats .mnemonio

# Include team memory in search
$ mnemonio search "coding standards" .mnemonio --team-dir ./team-memory

# Prune stale memories
$ mnemonio prune .mnemonio --max-age 90 --dry-run

Built for every agent workflow

From coding assistants to autonomous agents, Mnemonio fits wherever LLMs need persistent context.

💻

AI Coding Assistants

Your copilot remembers project conventions, testing preferences, and past decisions. No more repeating yourself.

🤖

Autonomous Agents

Long-running agents that accumulate knowledge across tasks. Extract facts, distill periodically, stay focused.

🛠️

Dev Tools & Pipelines

Embed memory in your toolchain. CLI for scripts, library for custom integrations, MCP for plug-and-play.

👥

Team Onboarding

New team members' agents instantly know your coding standards, architecture decisions, and project history.

🔄

Multi-Session Workflows

Debugging across sessions? Mnemonio preserves the investigation context so you pick up where you left off.

📋

Knowledge Management

Structure institutional knowledge as typed, searchable memories. Identity, directives, context, and bookmarks.

Shared context for the whole team

A read-only memory directory that gives every developer's agent the same baseline knowledge.

📁

Commit to your repo

Team memory lives in version control. Every clone gets the same conventions, standards, and context.

🔒

Read-only by design

Agents can read team memory but never modify it. Write operations only touch the private directory.

🛡️

Path traversal protection

Symlinks, ../ segments, and null bytes are blocked. validateTeamWritePath keeps the boundary safe.
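The checks described above can be sketched like this. This is an illustrative approximation, not the library's `validateTeamWritePath`: `isSafeTeamPath` is a hypothetical helper, and a real implementation would also need filesystem calls to detect symlinks.

```typescript
import * as path from 'node:path';

// Sketch of path-boundary validation (hypothetical helper): reject null
// bytes and absolute paths, then make sure the resolved path still lives
// inside the team directory after ../ segments are collapsed.
function isSafeTeamPath(teamDir: string, relPath: string): boolean {
  if (relPath.includes('\0')) return false;   // null bytes
  if (path.isAbsolute(relPath)) return false; // absolute paths
  const root = path.resolve(teamDir);
  const resolved = path.resolve(root, relPath);
  // Must be the root itself or a descendant of it
  return resolved === root || resolved.startsWith(root + path.sep);
}
```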

🔍

Unified search

Semantic search spans both private and team memories. One query, combined results, ranked by relevance.

// MCP config with team memory
{
  "mcpServers": {
    "mnemonio": {
      "command": "npx",
      "args": ["-p", "@memel06/mnemonio", "mnemonio-mcp"],
      "env": {
        "MNEMONIO_DIR": "./.mnemonio",
        "MNEMONIO_TEAM_DIR": "./team-memory"
      }
    }
  }
}

const store = createMnemonioStore({
  memoryDir: './.mnemonio',
  teamDir: './team-memory',
});

// Combined prompt includes both sources
const prompt = await store.buildCombinedPrompt();

// Validate paths before team writes
const safe = await store.validateTeamWritePath('notes.md');
const isTeam = store.isTeamPath(somePath);

# Include team memory in any read command
$ mnemonio list .mnemonio --team-dir ./team-memory
$ mnemonio search "coding standards" .mnemonio --team-dir ./team-memory
$ mnemonio stats .mnemonio --team-dir ./team-memory

TypeScript API

Full control over memory operations. Every method is typed, documented, and composable.

โ— No LLM Required

  • ensureDir() — Create memory dir + MANIFEST.md
  • scan() — List all memory files with metadata
  • readEntrypoint() — Read MANIFEST.md with truncation
  • buildPrompt() — Build memory context for prompts
  • buildCombinedPrompt() — Include team memory
  • stats() — File count, size, type breakdown
  • formatManifest() — Human-readable manifest

โ— LLM Required

  • findRelevant(query) — Semantic search
  • extract(config) — Extract facts from conversations
  • distill(config?) — Consolidate and clean up

โ— Lock Management

  • readLastDistilledAt() — Last distillation timestamp
  • tryAcquireLock() — Acquire distillation lock
  • rollbackLock() — Restore lock on failure

โ— Team Security

  • validateTeamWritePath() — Safe path resolution
  • isTeamPath() — Check team directory membership

LLM Callback Interface

type LlmCallback = (params: {
  system: string;
  messages: ReadonlyArray<{ role: 'user' | 'assistant'; content: string }>;
  maxTokens: number;
}) => Promise<string>;

// Works with any provider: just wire up your client
const llm: LlmCallback = async ({ system, messages, maxTokens }) => {
  const res = await yourClient.chat({ model: 'your-model', max_tokens: maxTokens, system, messages });
  return res.text;
};

Structured, typed, human-readable

Every memory is a markdown file with YAML frontmatter. Four types cover the full spectrum of agent knowledge.

Identity

User role, goals, preferences, expertise. Tailors agent behavior to the person.

Directive

Corrections and confirmed approaches. Behavioral guidance the agent should follow.

Context

Ongoing work, decisions, timelines, incidents. The evolving state of the project.

Bookmark

Pointers to external systems, docs, dashboards. Quick references for the agent.

---
name: testing-approach
description: Team prefers integration tests with real DB
type: directive
tags: [testing, database]
expires: 2026-12-31
---

Always use the real database for integration tests.

> reason: A prior incident where mocked tests
> passed but production broke on a schema change.

> scope: Use test containers or a dedicated test
> database. Never mock the DB layer in integration
> suites.

Frontmatter fields: name, description, type, tags, expires — all optional. The body is freeform markdown.
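Reading a file in this format back into its parts can be sketched as below. This is a hedged illustration, not the library's parser: `parseMemoryFile` is a hypothetical helper that only handles flat `key: value` lines, whereas real YAML allows richer structure.

```typescript
// Sketch of frontmatter parsing (hypothetical helper): split off the YAML
// block between the two --- markers, collect simple key: value pairs, and
// treat everything after the closing marker as the freeform body.
function parseMemoryFile(raw: string): { fields: Record<string, string>; body: string } {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { fields: {}, body: raw };
  const fields: Record<string, string> = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx > 0) fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { fields, body: raw.slice(match[0].length).trim() };
}
```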

Command-line interface

Manage memories from your terminal. Every command supports JSON output for scripting.

mnemonio init [dir] Create memory directory with MANIFEST.md
mnemonio scan [dir] Display all memory file headers
mnemonio list [dir] --type <t> List memories with descriptions and age, optionally filtered by type
mnemonio search <query> [dir] Semantic search across all memories (LLM required)
mnemonio distill [dir] --force Run consolidation pass: merge duplicates, prune stale entries (LLM required)
mnemonio stats [dir] File count, total size, type breakdown, age range
mnemonio prune [dir] --max-age <d> Remove stale or empty files, with optional dry-run

All commands default to the current directory. Add --team-dir ./team-memory to include shared memory. Add --json for machine-readable output.
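The --max-age filter can be pictured as a pure selection step. This is a sketch under the assumption that age means days since last modification; the CLI's actual rules may differ, and `selectStale` is a hypothetical name.

```typescript
// Sketch of --max-age pruning (assumed semantics: age in days since last
// modification). Returns the paths a dry run would report as prunable.
function selectStale(
  files: { path: string; mtime: Date }[],
  maxAgeDays: number,
  now: Date = new Date(),
): string[] {
  const cutoffMs = maxAgeDays * 24 * 60 * 60 * 1000;
  return files
    .filter((f) => now.getTime() - f.mtime.getTime() > cutoffMs)
    .map((f) => f.path);
}
```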

Works with any LLM provider

Auto-detects from the base URL, or set MNEMONIO_PROVIDER explicitly.

| Provider       | Detection                    | Auth             | Set via          |
|----------------|------------------------------|------------------|------------------|
| OpenAI         | URL contains openai.com      | Bearer token     | MNEMONIO_API_KEY |
| Anthropic      | URL contains anthropic.com   | x-api-key header | MNEMONIO_API_KEY |
| OpenRouter     | URL contains openrouter.ai   | Bearer token     | MNEMONIO_API_KEY |
| Any compatible | Fallback for all other URLs  | Bearer token     | MNEMONIO_API_KEY |
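The detection rules above can be sketched as a small resolver. This is a guess at the shape, not the library's actual code: `detectProvider` is a hypothetical helper that maps a base URL to a provider label and the auth style the table lists for it.

```typescript
// Sketch of provider detection from a base URL (hypothetical helper
// mirroring the rules in the table above).
function detectProvider(baseUrl: string): { provider: string; auth: string } {
  if (baseUrl.includes('openai.com')) return { provider: 'openai', auth: 'Bearer token' };
  if (baseUrl.includes('anthropic.com')) return { provider: 'anthropic', auth: 'x-api-key header' };
  if (baseUrl.includes('openrouter.ai')) return { provider: 'openrouter', auth: 'Bearer token' };
  // Fallback for all other URLs: any OpenAI-compatible endpoint
  return { provider: 'compatible', auth: 'Bearer token' };
}
```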

Give your agents a memory

Start building persistent, searchable, self-maintaining agent memory in minutes.

npm install @memel06/mnemonio