
Long-term memory system for Claude Code using HelixDB graph-vector database. Store and retrieve facts, preferences, context, and relationships across sessions using semantic search, reasoning chains, and time-window filtering.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: helix-memory
description: Long-term memory system for Claude Code using HelixDB graph-vector database. Store and retrieve facts, preferences, context, and relationships across sessions using semantic search, reasoning chains, and time-window filtering.
domain: memory
type: system
frequency: daily
commands: memory, recall

Helix Memory - Long-Term Memory for Claude Code

Store and retrieve persistent memory across sessions using HelixDB's graph-vector database. Features semantic search (via Ollama), reasoning chains (IMPLIES/CONTRADICTS/BECAUSE), time-window filtering, and hybrid search.

IMPORTANT: Always Use the Bash CLI

ALWAYS use the memory bash script - never call Python scripts directly.

The memory script is located in the helix-memory repo root. After installation, use it directly or via alias:

# Direct path (update to your install location)
/path/to/helix-memory/memory <command>

# Or if you set up an alias
memory <command>

Service Commands (Start/Stop)

# Start HelixDB (auto-starts Docker Desktop if needed)
memory start

# Stop HelixDB
memory stop

# Restart
memory restart

# Check status
memory status

Memory Commands

# Search memories
memory search "topic"

# List all (sorted by importance)
memory list --limit 10

# Quick store with auto-categorization (uses Ollama/Gemini)
memory remember "User prefers FastAPI over Flask"

# Store with explicit category
memory store "content" -t preference -i 9 -g "tags"

# Delete by ID (prefix OK)
memory delete abc123

# Find by tag
memory tag "wordpress"

# Help
memory help

Python API (For hooks/advanced use only)

The common.py module provides high-level functions:

import sys
sys.path.insert(0, '/path/to/helix-memory/hooks')
from common import (
    # Storage
    store_memory, store_memory_embedding, generate_embedding,
    # Retrieval
    get_all_memories, get_high_importance_memories,
    # Search
    search_by_similarity, search_by_text, hybrid_search,
    get_memories_by_time_window,
    # Reasoning chains
    create_implication, create_contradiction, create_causal_link, create_supersedes,
    get_implications, get_contradictions, get_reasoning_chain,
    # Utils
    check_helix_running, ensure_helix_running
)

Key Features

1. Semantic Search (Ollama)

Real vector similarity using the nomic-embed-text model:

# Search finds semantically related content, not just keywords
results = search_by_similarity("verify code works", k=5)
# Finds: "test before completing" even without keyword match

2. Time-Window Search

Filter memories by recency:

# Time windows: "recent" (4h), "contextual" (30d), "deep" (90d), "full" (all)
recent = get_memories_by_time_window("recent")      # Last 4 hours
contextual = get_memories_by_time_window("contextual")  # Last 30 days
all_time = get_memories_by_time_window("full")      # Everything

3. Hybrid Search

Combines vector similarity + text matching for best results:

results = hybrid_search("python testing preferences", k=10, window="contextual")

4. Reasoning Chains (Graph Power!)

Create logical relationships between memories:

# "prefers Python" IMPLIES "avoid Node.js suggestions"
create_implication(python_pref_id, avoid_node_id, confidence=9, reason="Language preference")

# "always use tabs" CONTRADICTS "always use spaces"
create_contradiction(tabs_id, spaces_id, severity=8, resolution="newer_wins")

# "migrated to FastAPI" BECAUSE "Flask too slow"
create_causal_link(fastapi_id, flask_slow_id, strength=9)

# New preference SUPERSEDES old one
create_supersedes(new_pref_id, old_pref_id)

Query reasoning chains:

implications = get_implications(memory_id)    # What does this imply?
contradictions = get_contradictions(memory_id)  # What conflicts with this?
chain = get_reasoning_chain(memory_id)        # Full reasoning graph

Memory Categories

Category     Importance   Description
preference   7-10         User preferences that guide interactions
fact         5-9          Factual info about user/projects/environment
context      4-8          Project/domain background
decision     6-10         Architectural decisions with rationale
task         3-9          Ongoing/future tasks
solution     6-9          Bug fixes, problem solutions

Storing Memories

Basic Storage

memory_id = store_memory(
    content="User prefers Python over Node.js for backend",
    category="preference",
    importance=9,
    tags="python,nodejs,backend,language",
    source="session-abc123"  # or "manual"
)

With Semantic Embedding

# Generate real embedding via Ollama
vector, model = generate_embedding(content)

# Store embedding for semantic search
store_memory_embedding(memory_id, vector, content, model)

Retrieving Memories

Get All/Filtered

all_mems = get_all_memories()
important = get_high_importance_memories(min_importance=8)
prefs = [m for m in all_mems if m.get('category') == 'preference']

Search

# Semantic (finds related meanings)
results = search_by_similarity("testing workflow", k=10)

# Text (exact substring match)
results = search_by_text("pytest")

# Hybrid (best of both)
results = hybrid_search("python testing", k=10, window="contextual")

Schema Overview

Nodes

  • Memory: content, category, importance, tags, source, created_at
  • MemoryEmbedding: vector (1536-dim), content, model
  • Context: name, description, context_type
  • Concept: name, concept_type, description

Reasoning Edges

  • Implies: Memory → Memory (confidence, reason)
  • Contradicts: Memory → Memory (severity, resolution)
  • Because: Memory → Memory (strength)
  • Supersedes: Memory → Memory (superseded_at)

Structural Edges

  • HasEmbedding: Memory → MemoryEmbedding
  • BelongsTo: Memory → Context
  • RelatedToConcept: Memory → Concept
  • RelatesTo: Memory → Memory (generic)
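To make the schema concrete, here is an illustrative sketch of the two core node types as plain Python dataclasses. The field names come from the list above; the classes themselves are not part of the skill.

from dataclasses import dataclass

# Illustrative only: mirrors the Memory and MemoryEmbedding node fields listed above.
@dataclass
class Memory:
    content: str
    category: str        # preference | fact | context | decision | task | solution
    importance: int      # 1-10
    tags: str            # comma-separated, e.g. "python,testing"
    source: str          # e.g. "manual" or a session id
    created_at: str      # timestamp set at storage time

@dataclass
class MemoryEmbedding:
    vector: list[float]  # 1536-dim; smaller embeddings are padded
    content: str
    model: str           # e.g. "nomic-embed-text"

# A HasEmbedding edge links each Memory node to its MemoryEmbedding node.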

REST API Endpoints

All endpoints: POST http://localhost:6969/{endpoint} with JSON body.

Storage

# Store memory
curl -X POST http://localhost:6969/StoreMemory -H "Content-Type: application/json" \
  -d '{"content":"...", "category":"preference", "importance":9, "tags":"...", "source":"manual"}'

# Create implication
curl -X POST http://localhost:6969/CreateImplication -H "Content-Type: application/json" \
  -d '{"from_id":"...", "to_id":"...", "confidence":8, "reason":"..."}'

Retrieval

# Get all memories
curl -X POST http://localhost:6969/GetAllMemories -H "Content-Type: application/json" -d '{}'

# Get implications
curl -X POST http://localhost:6969/GetImplications -H "Content-Type: application/json" \
  -d '{"memory_id":"..."}'

# Vector search
curl -X POST http://localhost:6969/SearchBySimilarity -H "Content-Type: application/json" \
  -d '{"query_vector":[...], "k":10}'

Automatic Memory (Hooks)

Memory storage/retrieval happens automatically via Claude Code hooks:

  • UserPromptSubmit (load_memories.py): Loads relevant memories before processing
  • Stop (reflect_and_store.py): Analyzes conversation, stores important items (every 5 prompts)
  • SessionStart (session_start.py): Initializes session context
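As a rough sketch of the storage half of the Stop hook's job, using only the documented common.py functions (the real reflect_and_store.py also analyzes the conversation and decides what to keep, which is omitted here):

import sys
sys.path.insert(0, '/path/to/helix-memory/hooks')
from common import (
    ensure_helix_running, store_memory,
    generate_embedding, store_memory_embedding,
)

def store_important_item(content: str, category: str, importance: int, tags: str) -> str:
    """Persist one item that the reflection step judged worth keeping."""
    ensure_helix_running()
    memory_id = store_memory(content=content, category=category,
                             importance=importance, tags=tags, source="hook")
    vector, model = generate_embedding(content)   # Ollama, Gemini, or hash fallback
    store_memory_embedding(memory_id, vector, content, model)
    return memory_id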

What Gets Auto-Stored

  • Explicit: "remember this:", "store this:"
  • Preferences: "I prefer...", "always use...", "never..."
  • Decisions: "decided to...", "let's use..."
  • Bug fixes: "the issue was...", "fixed by..."
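A hedged sketch of how these triggers could be detected; the actual hook's heuristics are not documented, and the patterns below simply mirror the phrases listed above:

import re
from typing import Optional

# Illustrative pattern list mirroring the triggers above; the real hook may use
# different heuristics (or a model call) to decide what to store.
TRIGGERS = {
    "explicit":   r"\b(remember this|store this)\b",
    "preference": r"\b(i prefer|always use|never)\b",
    "decision":   r"\b(decided to|let'?s use)\b",
    "solution":   r"\b(the issue was|fixed by)\b",
}

def classify(text: str) -> Optional[str]:
    """Return the first matching trigger category, or None if nothing looks storable."""
    lowered = text.lower()
    for category, pattern in TRIGGERS.items():
        if re.search(pattern, lowered):
            return category
    return None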

CLI Reference

# Service
memory start      # Start HelixDB (auto-starts Docker Desktop)
memory stop       # Stop HelixDB
memory restart    # Restart HelixDB
memory status     # Check status and memory count

# Memory operations
memory search "pytest"
memory list --limit 10
memory remember "User prefers pytest over unittest"
memory store "content" -t category -i importance -g "tags"
memory delete <memory-id>
memory tag "tagname"
memory help

Project Tagging

Memories are automatically tagged with a project name derived from the current working directory; when no other project identifier is available, the directory name itself is used as the fallback.
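A minimal sketch of that fallback behavior, assuming the tag is simply the working directory name (the real detection logic may consult other signals first):

import os

def project_tag(cwd: str = "") -> str:
    """Derive a project tag from the working directory name (the documented fallback)."""
    cwd = cwd or os.getcwd()
    return os.path.basename(os.path.normpath(cwd)).lower()

# e.g. /Users/me/code/helix-memory -> "helix-memory"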

Ollama Setup (For Real Semantic Search)

# Start Ollama service
brew services start ollama

# Pull embedding model (274MB)
ollama pull nomic-embed-text

# Verify
curl http://localhost:11434/api/tags

Without Ollama, the skill falls back to the Gemini API (if an API key is set) or to hash-based pseudo-embeddings.
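For reference, a hedged sketch of what the embedding fallback chain looks like: a direct call to Ollama's /api/embeddings endpoint, plus a purely illustrative hash-based stand-in. The documented generate_embedding() wraps this kind of logic (the Gemini fallback is not shown here).

import hashlib
import requests

def ollama_embedding(text: str) -> list[float]:
    """Get a real embedding from a local Ollama server running nomic-embed-text."""
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def pseudo_embedding(text: str, dim: int = 1536) -> list[float]:
    """Deterministic hash-based stand-in (illustrative only, not the repo's actual fallback)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [digest[i % len(digest)] / 255.0 for i in range(dim)]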

Best Practices

DO:

  • Store preferences immediately when expressed
  • Use reasoning chains to link related memories
  • Set appropriate importance (10=critical, 7-9=high, 4-6=medium, 1-3=low)
  • Use hybrid_search for best recall
  • Filter by time window to prioritize recent info

DON'T:

  • Store code snippets (use codebase)
  • Store sensitive data (passwords, keys)
  • Create duplicate memories (use find_similar_memories first; see the sketch after this list)
  • Forget to store embeddings (they are needed for semantic search)
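For the duplicate check, a hedged sketch built on the documented search_by_similarity (find_similar_memories is not part of the common.py import list above, and the result shape used here is an assumption):

import sys
sys.path.insert(0, '/path/to/helix-memory/hooks')
from common import search_by_similarity, store_memory

def store_if_new(content: str, category: str, importance: int, tags: str):
    """Skip storage when semantic search already returns an essentially identical memory."""
    hits = search_by_similarity(content, k=3) or []
    for hit in hits:
        # Assumed result shape: a dict with a 'content' field, as in get_all_memories() results.
        if hit.get("content", "").strip().lower() == content.strip().lower():
            return None
    return store_memory(content=content, category=category,
                        importance=importance, tags=tags, source="manual")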

Troubleshooting

DB Won't Start

# Use the memory script (handles Docker auto-start)
memory start

# Check container status
docker ps | grep helix

Ollama Not Working

brew services restart ollama
ollama list  # Should show nomic-embed-text

Vector Dimension Errors

HelixDB expects 1536-dim vectors. The code auto-pads smaller embeddings.
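Conceptually, the padding step just zero-fills the vector up to 1536 entries; a minimal sketch (the actual implementation lives in the repo):

def pad_vector(vector: list[float], dim: int = 1536) -> list[float]:
    """Zero-pad an embedding to the 1536 dimensions HelixDB expects."""
    if len(vector) >= dim:
        return vector[:dim]    # larger vectors would need truncation or a different model
    return vector + [0.0] * (dim - len(vector))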

Check Logs

docker logs $(docker ps -q --filter "name=helix-memory") 2>&1 | tail -20

Resources