---
name: search
description: Comprehensive search guidance (CLI + MCP tools). Use when searching local files, web content, library docs, or researching topics. Covers semantic search (CLI), web search (Brave/Exa), documentation (Context7), AI answers (Perplexity), URL analysis (WebFetch), or when user mentions search, find, lookup, research, Brave, Exa, Context7, Perplexity.
---
Search Tools
Unified guidance for all search operations: local semantic search, web search, documentation lookup, AI-powered research, and URL analysis.
Capabilities
Local Search:
- Semantic similarity search in local files/docs (CLI `search`)
- Exact pattern matching (Grep tool)
- File discovery by pattern (Glob tool)
Web Search:
- General web search with diverse sources (Brave MCP)
- AI-native search with content scraping (Exa MCP)
- URL content analysis (WebFetch tool)
Documentation Search:
- Library/framework documentation (Context7 MCP)
- Code examples and API context (Exa code_context MCP)
- Official docs via URLs (WebFetch tool)
AI-Powered Research:
- Quick factual answers (Perplexity search MCP)
- Complex multi-step analysis (Perplexity reason MCP)
- Comprehensive research reports (Perplexity deep_research MCP)
- Multi-agent parallel research (/research command)
Code Search:
- Structural code patterns (ast-grep via cli-dev)
- API/SDK documentation (Exa code_context MCP)
- Library-specific docs (Context7 MCP)
Quick Reference
Decision Matrix - When to use which tool:
| Use Case | First Choice | Alternative | When Alternative | Trade-offs |
|---|---|---|---|---|
| Local file similarity | CLI `search` | Grep(pattern) | Exact pattern needed | Semantic vs regex |
| Find project notes | CLI `search` | Basic Memory | Stored insights | Ephemeral vs persistent |
| General web search | Brave | Exa web_search | AI-native needed | Breadth vs depth |
| Library docs lookup | Context7 | Exa code_context | Unknown library name | Known vs discovery |
| Quick factual answer | Perplexity search | Brave | Need sources/citations | Speed vs verification |
| Code examples/patterns | Exa code_context | Context7 | Known library | Discovery vs direct |
| Complex analysis | Perplexity reason | Perplexity deep_research | Time constraints | Quality vs speed |
| Comprehensive research | Perplexity deep_research | /research command | Multi-agent needed | Depth vs breadth |
| API/SDK documentation | Exa code_context | Context7 | Known library name | Fresh vs canonical |
| Local codebase search | Grep(pattern) | CLI `search` | Semantic similarity | Exact vs fuzzy |
| News/current events | Brave | Perplexity search | Need AI summary | Raw vs interpreted |
| Compare alternatives | Perplexity reason | Perplexity deep_research | Detailed analysis | Quick vs thorough |
| Specific URL analysis | WebFetch | Brave → WebFetch | Don't have URL yet | Have URL vs discovery |
| Multi-page docs | Context7 | WebFetch (multiple) | Unstructured docs | Structured vs manual |
Tool Overview
CLI Tools
search (semtools):
- Semantic similarity search using local embeddings
- Fast keyword search with cosine similarity matching
- No API calls, works offline
- Use for: Finding conceptually similar content in local files (sketch below)
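A minimal sketch of typical invocations, using only the flags this skill documents (`--max-distance`, `--top-k`, `--n-lines`); exact defaults and output format depend on the installed semtools version:

```bash
# Semantic search over docs/, keeping only close matches (lower distance = more similar)
search "error handling patterns" docs/ --max-distance 0.5

# Cap results and include surrounding context lines
search "authentication flow" src/ --top-k 5 --n-lines 3
```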
Grep (Claude Code tool):
- Exact regex pattern matching
- No approval needed, built-in validation
- Use for: Precise content search with known patterns
WebFetch (Claude Code tool):
- Fetch and analyze specific URL content
- Converts HTML to markdown, processes with AI
- Use for: Deep analysis of known URLs
MCP Tools
Brave (brave-search):
- General web search with diverse sources
- Supports filters: news, videos, discussions, locations
- Country-specific, freshness filters, safesearch
- Use for: Broad web searches, multiple perspectives
Exa (exa):
- web_search_exa: AI-native search with content scraping
- get_code_context_exa: Programming task context (APIs/SDKs/libraries)
- High-quality, curated results
- Use for: Code-related queries, AI-optimized search
Context7 (context7):
- resolve-library-id: Convert library name → Context7 ID
- get-library-docs: Fetch structured documentation
- 2-step workflow required
- Use for: Known libraries (React, Next.js, Vue, etc.)
Perplexity (perplexity):
- search: Quick answers (Sonar Pro model)
- reason: Complex multi-step analysis (Sonar Reasoning Pro)
- deep_research: Comprehensive reports (Sonar Deep Research)
- All require extremely specific queries
- Use for: AI-synthesized answers, reasoning, research
When to Use Each Tool
Local Search: CLI search vs Grep
Prefer CLI search when:
- Finding conceptually similar content ("docs about authentication")
- Semantic similarity needed (fuzzy matching)
- Searching by meaning, not exact words
- Distance threshold control needed (`--max-distance`)
Prefer Grep when:
- Exact pattern matching required
- Regex capabilities needed
- Performance critical (Grep is optimized)
- Integration with other Claude tools
Example decision:
- ✅ CLI `search`: "Find files discussing error handling patterns"
- ✅ Grep: "Find all files containing 'handleError' function" (shell contrast below)
Web Search: Brave vs Exa vs WebFetch
Prefer Brave when:
- General web search needed
- Want diverse sources (news, forums, videos, etc.)
- Location-specific results important
- Need freshness filters (last day/week/month)
Prefer Exa when:
- AI-native, curated results preferred
- Content scraping/extraction needed
- Code-specific queries (use get_code_context_exa)
- Quality over quantity
Prefer WebFetch when:
- Already have specific URL
- Need full page content analysis
- Single-page deep dive required
- Custom prompt for content extraction
Example decisions:
- ✅ Brave: "What's the latest on React 19 release?"
- ✅ Exa: "Find high-quality articles about TypeScript generics"
- ✅ WebFetch: "Analyze this GitHub issue: https://github.com/..."
Documentation: Context7 vs Exa code_context
Prefer Context7 when:
- Library name known (React, Next.js, Vue, MongoDB, etc.)
- Want canonical, structured documentation
- Topic-specific docs needed (routing, hooks, etc.)
- Trust score important (7-10 range)
Prefer Exa code_context when:
- Library unknown, need discovery
- Want code examples alongside docs
- Fresh/recent content preferred
- Broader API/SDK context needed
Example decisions:
- ✅ Context7: "Show Next.js routing documentation"
- ✅ Exa: "Find docs for GraphQL query optimization"
AI Research: Perplexity Tools
search (quick answers):
- Factual questions needing fast response
- "What is X?", "How does Y work?"
- Simple lookups, definitions
- Sonar Pro model (fast)
reason (complex analysis):
- Multi-step reasoning needed
- Comparisons, explanations, trade-offs
- "Compare X vs Y for Z use case"
- Sonar Reasoning Pro model (thoughtful)
deep_research (comprehensive reports):
- In-depth topic exploration
- Multiple perspectives needed
- Detailed reports with focus areas
- Sonar Deep Research model (thorough, slow)
Example decisions:
- ✅ search: "What are React Server Components?"
- ✅ reason: "Compare Redux vs Zustand for e-commerce app state management"
- ✅ deep_research: "Event-driven architecture patterns for microservices"
Multi-Agent Research: Perplexity vs /research
Prefer Perplexity deep_research when:
- Single-agent sufficient
- Narrower scope
- 5-10 minute response acceptable
- Focus areas can be specified
Prefer /research command when:
- Multi-agent parallel research needed
- Broader topic exploration
- 3-20 workers for comprehensive coverage
- Orchestrator-worker pattern beneficial
Prescriptive Guidance
Always
Context7 workflow:
- ALWAYS call `resolve-library-id` before `get-library-docs` (sketch below)
- UNLESS user provides library ID in format `/org/project` or `/org/project/version`
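A sketch of the two-step flow in this skill's example notation; the resolved ID matches Example 2 below, and the version-pinned variant uses a hypothetical version string purely to illustrate the `/org/project/version` format:

```
1. resolve-library-id(libraryName="Next.js")
   → "/vercel/next.js"
2. get-library-docs(context7CompatibleLibraryID="/vercel/next.js", topic="routing")
   # or, version-pinned (hypothetical version, for illustration only):
   get-library-docs(context7CompatibleLibraryID="/vercel/next.js/v15", topic="routing")
```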
Perplexity queries:
- ALWAYS include specific details (see the example after this list):
- Exact error messages, logs, stack traces
- Version numbers (framework, language, tools)
- Code snippets showing context
- Platform, OS, environment details
- Attempted solutions
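A hedged contrast in this skill's example notation; the scenario and query content are invented for illustration, not taken from a real session:

```
# Too vague (produces generic output):
search(query="tell me about React")

# Well-specified (versions, environment, attempted fixes, exact symptom):
reason(query="Next.js 15 app router on Node 20: fetch() responses appear cached
across requests despite cache: 'no-store'. Already tried revalidate: 0 and
dynamic = 'force-dynamic'. Why might caching persist, and what else can I check?")
```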
CLI search thresholds:
- ALWAYS use `--max-distance` for threshold control (see the example below)
- Typical range: 0.3-0.7 (lower = more similar)
- Default is too permissive for most use cases
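For example, tightening or loosening the same query within the 0.3-0.7 guidance above:

```bash
# Precise: only near-matches of the concept survive
search "rate limiting middleware" docs/ --max-distance 0.3

# Exploratory: loosely related material allowed in
search "rate limiting middleware" docs/ --max-distance 0.7
```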
Prefer
Local search:
- Prefer CLI `search` for conceptual/semantic queries
- Prefer Grep for exact pattern matching
- Prefer Basic Memory for persistent stored insights
Web search:
- Prefer Brave for broad, diverse sources
- Prefer Exa for AI-curated, quality-focused results
- Prefer WebFetch when you have specific URL
- Prefer Perplexity when you need AI synthesis/analysis
Documentation:
- Prefer Context7 for known libraries (React, Next.js, etc.)
- Prefer Exa code_context for discovering APIs/SDKs
- Prefer official docs (via WebFetch) for canonical reference
Research:
- Prefer Perplexity search for quick factual answers
- Prefer Perplexity reason for complex analysis/comparisons
- Prefer Perplexity deep_research for comprehensive reports
- Prefer `/research` command for multi-agent parallel research
Avoid
Tool misuse:
- Avoid CLI `search` for exact pattern matching (use Grep)
- Avoid Brave for code-specific queries (use Exa code_context)
- Avoid Perplexity deep_research for time-sensitive queries (slow)
- Avoid WebFetch for discovery (use search tools first)
Context7 errors:
- Avoid skipping resolve-library-id step
- Avoid guessing library ID format
Perplexity quality:
- Avoid vague queries ("tell me about React")
- Avoid queries without version/context specifics
Resources
Detailed tool documentation (resources/):
search-cli.md - CLI semantic search (semtools)
- When to use vs Grep (semantic vs regex)
- Common flags: `--max-distance`, `--top-k`, `--n-lines`
- Bash patterns for Claude Code
- Use cases and examples
brave.md - Brave web search
- When to use (general web search)
- Parameters: query, count, country, freshness, filters
- Result filtering (news, videos, discussions, locations)
- vs Exa, WebFetch, Perplexity
exa.md - Exa AI-native search
- web_search_exa (real-time web + scraping)
- get_code_context_exa (programming context)
- When vs Brave, Context7
- Code-specific query patterns
context7.md - Context7 library documentation
- 2-step workflow: resolve-library-id → get-library-docs
- Selection criteria (name match, trust score)
- Library ID format (`/org/project`)
- When vs Exa code_context
perplexity.md - Perplexity AI search/reasoning
- search (Sonar Pro - quick answers)
- reason (Sonar Reasoning Pro - complex analysis)
- deep_research (Sonar Deep Research - comprehensive reports)
- Query specificity requirements
- When vs /research command
Progressive Disclosure Strategy
Token efficiency:
Current state (MCP tools always loaded):
- 8 MCP search tools × ~125 tokens/tool = ~1,000 tokens baseline
With Skill(search):
- Metadata only: ~100 tokens (90% reduction)
- Metadata + Quick Reference: ~300 tokens (70% reduction)
- Metadata + 1 resource: ~500 tokens (50% reduction)
- Metadata + 2-3 resources: ~800 tokens (20% reduction)
- Full skill load: ~1,200 tokens (comprehensive guidance)
Typical session patterns:
- 60% sessions: Metadata only (no search) = 100 tokens (90% savings)
- 25% sessions: Metadata + 1 resource = 500 tokens (50% savings)
- 10% sessions: Metadata + 2-3 resources = 800 tokens (20% savings)
- 5% sessions: Full load = 1,200 tokens (comprehensive)
Weighted average: ~300 tokens/session (70% reduction from baseline; computed below)
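The weighted average follows directly from the session mix above:

```
0.60 × 100 + 0.25 × 500 + 0.10 × 800 + 0.05 × 1,200
  = 60 + 125 + 80 + 60
  = 325 ≈ 300 tokens/session
```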
Loading strategy:
Metadata triggers on:
- "search", "find", "lookup", "research"
- Tool names: "Brave", "Exa", "Context7", "Perplexity"
- Use cases: "web search", "docs", "library documentation"
Resources load on-demand:
- search-cli.md: "semantic search", "CLI search", mentions CLI `search` command
- brave.md: "Brave", "web search", "brave_web_search"
- exa.md: "Exa", "code context", "get_code_context_exa", "web_search_exa"
- context7.md: "Context7", "library docs", "resolve-library-id"
- perplexity.md: "Perplexity", "Sonar Pro", "reason", "deep_research"
Integration Points
CLAUDE.md Tool Selection:
- CLI `search` row exists (line 29): Semantic search → `search "query" files` vs `Grep(pattern)`
- References this skill for complete search guidance
Related skills:
- cli-dev: File operations (fd for file discovery, complements semantic search)
- bash-patterns: CLI search orchestration patterns
- research: `/research` command for multi-agent vs Perplexity single-agent
- codex: Codex CLI vs Perplexity (different models/use cases)
Commands:
- /research: Multi-agent research (3-20 workers) vs Perplexity deep_research
MCP tools consolidated:
- brave-search: brave_web_search
- exa: web_search_exa, get_code_context_exa
- context7: resolve-library-id, get-library-docs
- perplexity: search, reason, deep_research
Cross-references:
- See Skill("cli-dev") for ast-grep (structural code search)
- See Skill("bash-patterns") for CLI search approval optimization
- See `/research` command for multi-agent parallel research
Usage Examples
Example 1: Local semantic search
User: "Find files discussing authentication flow"
→ Skill(search) loads (metadata + Quick Reference)
→ Decision Matrix: Local file similarity → CLI `search`
→ Load resources/search-cli.md
→ Execute: search "authentication flow" docs/ --max-distance 0.5
Example 2: Library documentation
User: "Show me Next.js routing docs"
→ Skill(search) loads
→ Decision Matrix: Library docs lookup → Context7
→ Load resources/context7.md
→ Execute:
1. resolve-library-id(libraryName="Next.js") → "/vercel/next.js"
2. get-library-docs(context7CompatibleLibraryID="/vercel/next.js", topic="routing")
Example 3: Code context discovery
User: "Find examples of GraphQL query optimization"
→ Skill(search) loads
→ Decision Matrix: Code examples/patterns → Exa code_context
→ Load resources/exa.md
→ Execute: get_code_context_exa(query="GraphQL query optimization examples", tokensNum=5000)
Example 4: Complex reasoning
User: "Compare Redux vs Zustand for React state management"
→ Skill(search) loads
→ Decision Matrix: Complex analysis → Perplexity reason
→ Load resources/perplexity.md
→ Execute: reason(query="Compare Redux vs Zustand for React state management. Context: e-commerce app, 50+ components, TypeScript. Concerns: bundle size, DevEx, TypeScript support.")
Example 5: URL analysis
User: "Analyze this GitHub issue: https://github.com/..."
→ Skill(search) loads
→ Decision Matrix: Specific URL analysis → WebFetch
→ No resource load needed (built-in tool)
→ Execute: WebFetch(url="https://github.com/...", prompt="Summarize issue and proposed solutions")
Success Metrics
Trigger accuracy:
- 95% auto-load when search/research mentioned
- No collisions with other skills
- Specific tool names as secondary triggers
Token efficiency:
- Metadata: ~100 tokens (always loaded)
- SKILL.md: ~500 tokens (skill invoked)
- Resources: ~500-800 tokens (on-demand)
- vs baseline: 70% typical savings (1,000 → 300 tokens)
Prescriptive guidance quality:
- All 5 resources have "When to Use", "Prescriptive Guidance", "Examples"
- Clear preference statements ("prefer X for Y")
- Trade-off discussions (X vs Y trade-offs)
Progressive disclosure effectiveness:
- Resources load only when specific tool mentioned
- Quick Reference sufficient for tool selection
- Detailed guidance available on-demand
Usability:
- Decision Matrix provides immediate tool selection (14 use cases)
- Prescriptive guidance eliminates guesswork
- Examples show complete workflows
Notes
Type: Guidance-Only + MCP Wrapper (no scripts; tools are external)
Portability: Self-contained .claude/skills/search/ folder (shareable)
Tool installation:
- CLI `search`: Install via `cargo install semtools` (sketch below)
- MCP tools: Configured in `.mcp.json` (external services)
- WebFetch, Grep, Glob: Built-in Claude Code tools
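Install sketch, assuming a Rust toolchain is available and the crate name matches the note above:

```bash
# Install semtools, which provides the `search` binary
cargo install semtools

# Confirm it is on PATH (assumes the binary exposes a standard --help)
search --help
```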
Pattern: Type 6 Guidance-Only (matches cli-dev, cli-doc, memgraph patterns)
Quality target: 9/10-9.5/10 (DRY, token efficiency, accuracy, completeness, format)