| name | agentdb-memory-patterns |
| description | Apply persistent memory patterns for AI agents using AgentDB. Implement session memory, configure long-term storage, enable pattern learning, and manage context across sessions. Use when building stateful agents, creating chat systems, or designing intelligent assistants that learn from interactions. |
| version | 1.0.0 |
| category | platforms |
| tags | platforms, integration, tools |
| author | ruv |
When NOT to Use This Skill
- Local-only operations with no vector search needs
- Simple key-value storage without semantic similarity
- Real-time streaming data without persistence requirements
- Operations that do not require embedding-based retrieval
Success Criteria
- Vector search query latency: <10ms for 99th percentile
- Embedding generation: <100ms per document
- Index build time: <1s per 1000 vectors
- Recall@10: >0.95 for similar documents
- Database connection success rate: >99.9%
- Memory footprint: <2GB for 1M vectors with quantization
Edge Cases & Error Handling
- Rate Limits: AgentDB local instances have no rate limits; cloud deployments may vary
- Connection Failures: Implement retry logic with exponential backoff (max 3 retries); see the sketch after this list
- Index Corruption: Maintain backup indices; rebuild from source if corrupted
- Memory Overflow: Use quantization (4-bit, 8-bit) to reduce memory by 4-32x
- Stale Embeddings: Implement TTL-based refresh for dynamic content
- Dimension Mismatch: Validate embedding dimensions (384 for sentence-transformers) before insertion
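The retry guidance above can be implemented with a small wrapper; a minimal sketch, assuming a generic async operation (the withRetry helper is illustrative and not part of the AgentDB API):

async function withRetry<T>(op: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential backoff: 100ms, 200ms, 400ms before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage (hypothetical call): await withRetry(() => adapter.insertPattern(pattern));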
Guardrails & Safety
- NEVER expose database connection strings in logs or error messages
- ALWAYS validate vector dimensions before insertion
- ALWAYS sanitize metadata to prevent injection attacks (a validation sketch follows this list)
- NEVER store PII in vector metadata without encryption
- ALWAYS implement access control for multi-tenant deployments
- ALWAYS validate search results before returning to users
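A minimal sketch of the dimension and metadata guardrails above, assuming a 384-dimension embedding model (both helpers are illustrative, not AgentDB APIs):

const EXPECTED_DIM = 384; // matches sentence-transformers MiniLM-style models

function validateEmbedding(embedding: number[]): void {
  if (embedding.length !== EXPECTED_DIM) {
    throw new Error(`Dimension mismatch: got ${embedding.length}, expected ${EXPECTED_DIM}`);
  }
  if (embedding.some((v) => !Number.isFinite(v))) {
    throw new Error('Embedding contains NaN or infinite values');
  }
}

function sanitizeMetadata(metadata: Record<string, unknown>): Record<string, string> {
  const clean: Record<string, string> = {};
  for (const [key, value] of Object.entries(metadata)) {
    // Keep only simple keys and flatten values to bounded strings
    if (/^[\w.-]{1,64}$/.test(key)) {
      clean[key] = String(value).slice(0, 1024);
    }
  }
  return clean;
}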
Evidence-Based Validation
- Verify database health: Check connection status and index integrity
- Validate search quality: Measure recall/precision on test queries (a Recall@k sketch follows this list)
- Monitor performance: Track query latency, throughput, and memory usage
- Test failure recovery: Simulate connection drops and index corruption
- Benchmark improvements: Compare against baseline metrics (e.g., 150x speedup claim)
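Search quality can be validated offline by measuring Recall@k on a labeled test set; a minimal sketch, assuming you supply a search function (for example a thin wrapper around your retrieval call) and ground-truth relevant ids:

interface TestQuery {
  embedding: number[];
  relevantIds: Set<string>; // ground-truth relevant pattern/document ids
}

async function recallAtK(
  queries: TestQuery[],
  search: (embedding: number[], k: number) => Promise<string[]>,
  k = 10,
): Promise<number> {
  let total = 0;
  for (const query of queries) {
    const returnedIds = await search(query.embedding, k);
    const hits = returnedIds.filter((id) => query.relevantIds.has(id)).length;
    total += hits / Math.min(k, query.relevantIds.size);
  }
  return total / queries.length; // target: >0.95 per the success criteria above
}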
AgentDB Memory Patterns
What This Skill Does
Use this skill to implement memory management patterns for AI agents using AgentDB's persistent storage and ReasoningBank integration. Apply these patterns to enable agents to remember conversations, learn from interactions, and maintain context across sessions. Deploy triple-layer retention (24h/7d/30d+) for optimal memory organization.
Performance: benchmarks report 150x-12,500x faster memory operations than traditional solutions (see Performance Benchmarks below), with 100% backward compatibility with the ReasoningBank API.
Prerequisites
Install Node.js 18+ and AgentDB v1.0.7+, available either through agentic-flow or as a standalone package. Review agent architecture patterns before implementing memory systems.
Quick Start with CLI
Initialize AgentDB
Run these commands to set up your AgentDB instance with memory patterns:
# Initialize vector database
npx agentdb@latest init ./agents.db
# Or with custom dimensions
npx agentdb@latest init ./agents.db --dimension 768
# Use preset configurations
npx agentdb@latest init ./agents.db --preset large
# In-memory database for testing
npx agentdb@latest init ./memory.db --in-memory
Start MCP Server for Claude Code
# Start MCP server (integrates with Claude Code)
npx agentdb@latest mcp
# Add to Claude Code (one-time setup)
claude mcp add agentdb npx agentdb@latest mcp
Create Learning Plugin
# Interactive plugin wizard
npx agentdb@latest create-plugin
# Use template directly
npx agentdb@latest create-plugin -t decision-transformer -n my-agent
# Available templates:
# - decision-transformer (sequence modeling RL)
# - q-learning (value-based learning)
# - sarsa (on-policy TD learning)
# - actor-critic (policy gradient)
# - curiosity-driven (exploration-based)
Quick Start with API
import { createAgentDBAdapter } from 'agentic-flow/reasoningbank';
// Initialize with default configuration
const adapter = await createAgentDBAdapter({
dbPath: '.agentdb/reasoningbank.db',
enableLearning: true, // Enable learning plugins
enableReasoning: true, // Enable reasoning agents
quantizationType: 'scalar', // binary | scalar | product | none
cacheSize: 1000, // In-memory cache
});
// Store interaction memory
const patternId = await adapter.insertPattern({
id: '',
type: 'pattern',
domain: 'conversation',
pattern_data: JSON.stringify({
embedding: await computeEmbedding('What is the capital of France?'),
pattern: {
user: 'What is the capital of France?',
assistant: 'The capital of France is Paris.',
timestamp: Date.now()
}
}),
confidence: 0.95,
usage_count: 1,
success_count: 1,
created_at: Date.now(),
last_used: Date.now(),
});
// Retrieve context with reasoning
const context = await adapter.retrieveWithReasoning(queryEmbedding, {
domain: 'conversation',
k: 10,
useMMR: true, // Maximal Marginal Relevance
synthesizeContext: true, // Generate rich context
});
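The snippet above assumes a computeEmbedding() helper and a queryEmbedding value; neither is provided by the adapter. A minimal sketch using a local 384-dimension sentence-transformers model via @xenova/transformers (an assumption; any embedding provider matching your configured dimension works):

import { pipeline } from '@xenova/transformers';

// Lazily create a feature-extraction pipeline (all-MiniLM-L6-v2, 384 dimensions)
const extractorPromise = pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function computeEmbedding(text: string): Promise<number[]> {
  const extractor = await extractorPromise;
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}

// Example query embedding for retrieveWithReasoning()
const queryEmbedding = await computeEmbedding('capital of France');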
Memory Patterns
Apply these memory patterns to organize agent memory across multiple time horizons.
1. Session Memory
Implement session-based memory to track conversation history:
class SessionMemory {
  // Assumes `db` is an initialized AgentDB client available in scope
  constructor(private sessionId: string) {}

  // Persist a single chat turn for this session
  async storeMessage(role: string, content: string) {
    return await db.storeMemory({
      sessionId: this.sessionId,
      role,
      content,
      timestamp: Date.now()
    });
  }

  // Fetch the most recent messages for this session (default: last 20)
  async getSessionHistory(limit = 20) {
    return await db.query({
      filters: { sessionId: this.sessionId },
      orderBy: 'timestamp',
      limit
    });
  }
}
2. Long-Term Memory
// Store important facts
await db.storeFact({
category: 'user_preference',
key: 'language',
value: 'English',
confidence: 1.0,
source: 'explicit'
});
// Retrieve facts
const prefs = await db.getFacts({
category: 'user_preference'
});
3. Pattern Learning
// Learn from successful interactions
await db.storePattern({
trigger: 'user_asks_time',
response: 'provide_formatted_time',
success: true,
context: { timezone: 'UTC' }
});
// Apply learned patterns
const pattern = await db.matchPattern(currentContext);
Advanced Patterns
Hierarchical Memory
// Organize memory in hierarchy
await memory.organize({
immediate: recentMessages, // Last 10 messages
shortTerm: sessionContext, // Current session
longTerm: importantFacts, // Persistent facts
semantic: embeddedKnowledge // Vector search
});
Memory Consolidation
// Periodically consolidate memories
await memory.consolidate({
strategy: 'importance', // Keep important memories
maxSize: 10000, // Size limit
minScore: 0.5 // Relevance threshold
});
CLI Operations
Query Database
# Query with vector embedding
npx agentdb@latest query ./agents.db "[0.1,0.2,0.3,...]"
# Top-k results
npx agentdb@latest query ./agents.db "[0.1,0.2,0.3]" -k 10
# With similarity threshold
npx agentdb@latest query ./agents.db "0.1 0.2 0.3" -t 0.75
# JSON output
npx agentdb@latest query ./agents.db "[...]" -f json
Import/Export Data
# Export vectors to file
npx agentdb@latest export ./agents.db ./backup.json
# Import vectors from file
npx agentdb@latest import ./backup.json
# Get database statistics
npx agentdb@latest stats ./agents.db
Performance Benchmarks
# Run performance benchmarks
npx agentdb@latest benchmark
# Results show:
# - Pattern Search: 150x faster (100µs vs 15ms)
# - Batch Insert: 500x faster (2ms vs 1s)
# - Large-scale Query: 12,500x faster (8ms vs 100s)
Integration with ReasoningBank
import { createAgentDBAdapter, migrateToAgentDB } from 'agentic-flow/reasoningbank';
// Migrate from legacy ReasoningBank
const result = await migrateToAgentDB(
'.swarm/memory.db', // Source (legacy)
'.agentdb/reasoningbank.db' // Destination (AgentDB)
);
console.log(`✅ Migrated ${result.patternsMigrated} patterns`);
// Train learning model
const adapter = await createAgentDBAdapter({
enableLearning: true,
});
await adapter.train({
epochs: 50,
batchSize: 32,
});
// Get optimal strategy with reasoning
const strategy = await adapter.retrieveWithReasoning(queryEmbedding, {
domain: 'task-planning',
synthesizeContext: true,
optimizeMemory: true,
});
Learning Plugins
Available Algorithms (9 Total)
- Decision Transformer - Sequence modeling RL (recommended)
- Q-Learning - Value-based learning
- SARSA - On-policy TD learning
- Actor-Critic - Policy gradient with baseline
- Active Learning - Query selection
- Adversarial Training - Robustness
- Curriculum Learning - Progressive difficulty
- Federated Learning - Distributed learning
- Multi-task Learning - Transfer learning
List and Manage Plugins
# List available plugins
npx agentdb@latest list-plugins
# List plugin templates
npx agentdb@latest list-templates
# Get plugin info
npx agentdb@latest plugin-info <name>
Reasoning Agents (4 Modules)
- PatternMatcher - Find similar patterns with HNSW indexing
- ContextSynthesizer - Generate rich context from multiple sources
- MemoryOptimizer - Consolidate similar patterns, prune low-quality
- ExperienceCurator - Quality-based experience filtering
Best Practices
- Enable quantization: Use scalar/binary for 4-32x memory reduction
- Use caching: 1000 pattern cache for <1ms retrieval
- Batch operations: 500x faster than individual inserts (see the batching sketch after this list)
- Train regularly: Update learning models with new experiences
- Enable reasoning: Automatic context synthesis and optimization
- Monitor metrics: Use the stats command to track performance
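A sketch of the batching advice above using only the insertPattern call shown earlier, issuing inserts concurrently in chunks (if your AgentDB version exposes a dedicated bulk-insert API, prefer that instead):

async function insertBatch<T>(
  adapter: { insertPattern(pattern: T): Promise<unknown> },
  patterns: T[],
  chunkSize = 100,
): Promise<void> {
  for (let i = 0; i < patterns.length; i += chunkSize) {
    const chunk = patterns.slice(i, i + chunkSize);
    // One chunk of concurrent inserts instead of awaiting each insert in sequence
    await Promise.all(chunk.map((pattern) => adapter.insertPattern(pattern)));
  }
}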
Troubleshooting
Issue: Memory growing too large
# Check database size
npx agentdb@latest stats ./agents.db
# Enable quantization
# Use 'binary' (32x smaller) or 'scalar' (4x smaller)
Issue: Slow search performance
# Enable HNSW indexing and caching
# Results: <100µs search time
Issue: Migration from legacy ReasoningBank
# Automatic migration with validation
npx agentdb@latest migrate --source .swarm/memory.db
Performance Characteristics
- Vector Search: <100µs (HNSW indexing)
- Pattern Retrieval: <1ms (with cache)
- Batch Insert: 2ms for 100 patterns
- Memory Efficiency: 4-32x reduction with quantization
- Backward Compatibility: 100% compatible with ReasoningBank API
Learn More
- GitHub: https://github.com/ruvnet/agentic-flow/tree/main/packages/agentdb
- Documentation: node_modules/agentic-flow/docs/AGENTDB_INTEGRATION.md
- MCP Integration: npx agentdb@latest mcp for Claude Code
- Website: https://agentdb.ruv.io
Core Principles
AgentDB Memory Patterns operates on 3 fundamental principles:
Principle 1: Triple-Layer Memory Retention
Agent memory organizes across three time horizons: immediate (10 messages), short-term (session context), and long-term (persistent facts), mirroring human memory architecture.
In practice:
- Store recent messages in immediate memory for context window management
- Persist session-level context (goals, preferences) in short-term memory
- Extract important facts to long-term memory for cross-session retrieval
- Consolidate memories periodically to prevent unbounded growth
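As a concrete illustration of the 24h/7d/30d+ retention described above, a minimal sketch that classifies a memory by age (the cutoffs follow the skill description; the helper itself is illustrative):

type MemoryLayer = 'immediate' | 'short-term' | 'long-term';

const DAY_MS = 24 * 60 * 60 * 1000;

// <24h: immediate, <7d: short-term, otherwise long-term (30d+ retention)
function retentionLayer(createdAt: number, now = Date.now()): MemoryLayer {
  const ageMs = now - createdAt;
  if (ageMs < DAY_MS) return 'immediate';
  if (ageMs < 7 * DAY_MS) return 'short-term';
  return 'long-term';
}

// During consolidation, only memories promoted to long-term (e.g. important facts)
// should survive past 30 days; everything else is a pruning candidate.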
Principle 2: Pattern Learning via Reinforcement
Agents improve through experience by storing interaction patterns and training on successful/failed attempts using 9 RL algorithms.
In practice:
- Record trigger-response patterns with success indicators and context
- Train learning plugins (Decision Transformer, Q-Learning, Actor-Critic) on pattern corpus
- Retrieve proven patterns via vector similarity for new situations
- Update pattern confidence scores based on real-world performance
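A minimal sketch of updating confidence from observed outcomes, reusing the usage_count/success_count fields from the Quick Start pattern record (the smoothing rule is an illustrative choice, not a prescribed AgentDB formula):

interface PatternStats {
  confidence: number;
  usage_count: number;
  success_count: number;
}

// Re-estimate confidence after each use with Laplace smoothing, so one failure
// on a rarely used pattern does not collapse its score to zero.
function updateConfidence(stats: PatternStats, success: boolean): PatternStats {
  const usage_count = stats.usage_count + 1;
  const success_count = stats.success_count + (success ? 1 : 0);
  const confidence = (success_count + 1) / (usage_count + 2);
  return { confidence, usage_count, success_count };
}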
Principle 3: Reasoning-Augmented Retrieval
Context synthesis combines multiple memory sources with reasoning agents (PatternMatcher, ContextSynthesizer, MemoryOptimizer) for rich contextual understanding.
In practice:
- Use retrieveWithReasoning() instead of raw vector search
- Enable synthesizeContext: true to generate coherent context from fragments
- Apply Maximal Marginal Relevance (MMR) to reduce redundancy in results
- Leverage ExperienceCurator to filter low-quality memories
Common Anti-Patterns
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Flat Memory Structure | Storing all memories at same priority causes retrieval noise and unbounded growth | Implement hierarchical memory with immediate/short-term/long-term layers |
| No Memory Consolidation | Keeping all interactions forever exhausts storage and degrades retrieval quality | Run periodic consolidation with importance-based pruning and size limits |
| Raw Vector Search | Retrieving embeddings without reasoning misses contextual relationships | Use retrieveWithReasoning() with synthesizeContext: true for richer results |
| Ignoring Learning Loops | Never training on stored patterns wastes collected experience | Train learning plugins regularly with adapter.train() after accumulating experiences |
| Storing Everything | Recording trivial interactions pollutes memory with low-value data | Filter by importance/relevance; store only interactions with confidence >0.5 |
Conclusion
The AgentDB Memory Patterns skill enables stateful AI agents through persistent cross-session memory with triple-layer retention and pattern learning capabilities. By combining HNSW-accelerated retrieval, 9 RL algorithms, and reasoning-augmented context synthesis, it provides 150x-12,500x faster memory operations than traditional solutions.
Use this skill when building conversational agents, creating intelligent assistants that learn from interactions, or implementing systems requiring context persistence across sessions. The pattern library excels at chat systems, task-planning agents, and recommendation systems where historical interactions inform future decisions.
Key takeaways: Organize memory hierarchically across time horizons, train learning plugins on accumulated patterns, and leverage reasoning agents for context synthesis. The 100% backward compatibility with ReasoningBank and seamless Claude Code integration via MCP make AgentDB Memory Patterns a production-ready choice for stateful agent development.