| name | research-and-report |
| version | 2.0.0 |
| description | Systematic technical research combining multi-source discovery with evidence-based analysis. Delivers authoritative recommendations backed by credible sources. Use when researching best practices, evaluating technologies, comparing approaches, discovering documentation, troubleshooting with authoritative sources, or when research, documentation, evaluation, comparison, or `--research` are mentioned. |
Research
Systematic investigation → evidence-based analysis → authoritative recommendations.
- Technology evaluation and comparison
- Documentation discovery and troubleshooting
- Best practices and industry standards research
- Implementation guidance with authoritative sources
NOT for: quick lookups, well-known patterns, time-critical debugging without investigation phase
Track with TodoWrite. Phases advance only, never regress.
| Phase | Trigger | activeForm |
|---|---|---|
| Analyze Request | Session start | "Analyzing research request" |
| Discover Sources | Criteria defined | "Discovering sources" |
| Gather Information | Sources identified | "Gathering information" |
| Synthesize Findings | Information gathered | "Synthesizing findings" |
| Compile Report | Synthesis complete | "Compiling report" |
TodoWrite format:
- Analyze Request → { scope definition }
- Discover Sources → { multi-source strategy }
- Gather Information → { key areas }
- Synthesize Findings → { comparison/evaluation }
- Compile Report → { deliverable type }
Situational (insert when triggered):
- Gather Information → if gaps discovered during synthesis
Workflow:
- Start: Create "Analyze Request" as
in_progress - Transition: Mark current
completed, add nextin_progress - Simple queries: Skip directly to "Gather Information" if unambiguous
- Gaps during synthesis: Add new "Gather Information" task
- Early termination: Skip to "Compile Report" +
△ Caveats
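For reference, a minimal sketch of the todo list at the moment "Discover Sources" begins, assuming TodoWrite's `content`/`status`/`activeForm` fields:

```ts
// Illustrative only: one phase in_progress, the prior phase completed.
type Todo = {
  content: string;
  status: "pending" | "in_progress" | "completed";
  activeForm: string;
};

const todos: Todo[] = [
  { content: "Analyze Request → scope definition", status: "completed", activeForm: "Analyzing research request" },
  { content: "Discover Sources → multi-source strategy", status: "in_progress", activeForm: "Discovering sources" },
  { content: "Gather Information → key areas", status: "pending", activeForm: "Gathering information" },
];
```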
Five-phase systematic approach:
1. Question Phase — Define scope and success criteria
- Decision to be made?
- Evaluation parameters? (performance, maintainability, security, adoption)
- Constraints? (timeline, expertise, infrastructure)
- Context already known?
Query types:
- Installation/Setup → prerequisites, commands, configuration
- Problem Resolution → error patterns, solutions, workarounds
- API Reference → signatures, parameters, return values
- Technology Evaluation → framework comparison, tool selection
- Best Practices → industry standards, proven patterns
- Implementation → code examples, patterns, methodologies
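One way to make this routing mechanical is a keyword heuristic; a sketch, where the keyword lists are illustrative rather than exhaustive:

```ts
// Hypothetical keyword routing: map a raw query to one of the types above.
const QUERY_TYPES: Record<string, string[]> = {
  installation: ["install", "setup", "configure", "prerequisites"],
  problemResolution: ["error", "fails", "exception", "workaround"],
  apiReference: ["signature", "parameter", "returns", "method"],
  technologyEvaluation: ["vs", "compare", "alternative", "which"],
  bestPractices: ["best practice", "standard", "convention"],
  implementation: ["how to", "example", "pattern"],
};

function classifyQuery(query: string): string {
  const q = query.toLowerCase();
  for (const [type, keywords] of Object.entries(QUERY_TYPES)) {
    if (keywords.some((k) => q.includes(k))) return type;
  }
  return "implementation"; // default to the broadest type
}
```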
2. Discovery Phase — Multi-source retrieval
Source selection by use case:
| Use Case | Primary | Secondary | Tertiary |
|---|---|---|---|
| Official docs | context7 | octocode | firecrawl |
| Troubleshooting | octocode issues | firecrawl community | context7 guides |
| Code examples | octocode repos | firecrawl tutorials | context7 examples |
| Technology eval | Parallel all | Cross-reference | Validate |
Progressive pattern:
- Package discovery →
octocode.packageSearch+context7.resolve-library-id - Multi-source parallel →
context7.get-library-docs+octocode.githubSearchCode+firecrawl.search - Intelligent fallback → if context7 fails, try octocode issues → firecrawl alternatives
3. Evaluation Phase — Analyze against criteria
| Criterion | Metrics |
|---|---|
| Performance | Benchmarks, latency, throughput, memory |
| Maintainability | Code complexity, docs quality, community activity |
| Security | CVEs, audits, compliance (OWASP, CWE) |
| Adoption | Downloads, production usage, industry patterns |
| Ecosystem | Integrations, plugins, tooling |
Source authority hierarchy:
- Official Documentation → creators/maintainers
- Standards Bodies → RFCs, W3C, IEEE, ISO
- Benchmark Studies → performance comparisons
- Case Studies → real-world implementations
- Community Consensus → adoption patterns, surveys
4. Comparison Phase — Systematic tradeoff analysis
Comparison matrix:
| Criterion | Option A | Option B | Option C |
|-------------|----------|----------|----------|
| Performance | 10k/s | 15k/s | 8k/s |
| Learning | Moderate | Steep | Gentle |
| Ecosystem | Large | Medium | Small |
For each option document:
- Strengths → what it excels at + evidence
- Weaknesses → limitations + edge cases
- Best fit → when this is the right choice
- Deal breakers → when this fails
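To keep the tradeoff analysis explicit, the matrix can also be scored; a minimal sketch with placeholder weights and values:

```ts
// Criterion → score (0..10). Weights and scores here are placeholders.
type Scores = Record<string, number>;

const weights: Scores = { performance: 0.4, learning: 0.2, ecosystem: 0.4 };

const options: Record<string, Scores> = {
  optionA: { performance: 7, learning: 6, ecosystem: 9 },
  optionB: { performance: 9, learning: 3, ecosystem: 6 },
  optionC: { performance: 5, learning: 9, ecosystem: 4 },
};

function weightedScore(scores: Scores): number {
  return Object.entries(weights)
    .reduce((sum, [criterion, w]) => sum + w * (scores[criterion] ?? 0), 0);
}

// e.g. weightedScore(options.optionB) = 0.4*9 + 0.2*3 + 0.4*6 = 6.6
```

The number only surfaces where tradeoffs bite; it does not replace the strengths/weaknesses narrative above.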
5. Recommendation Phase — Clear guidance with rationale
Structure:
- Primary Recommendation → clear statement + rationale + confidence
- Alternatives → when to consider + tradeoffs
- Implementation → next steps + pitfalls + validation
- Limitations → edge cases + gaps + assumptions
Token efficiency: target 70–85% token reduction while maintaining accuracy.
| Query Type | Budget | Reduction |
|---|---|---|
| Quick Reference | 800 | 85% |
| Installation | 1200 | 80% |
| Troubleshooting | 1500 | 75% |
| Comprehensive | 2000 | 70% |
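Expressed as a lookup for the pre-output budget check (a sketch; the numbers simply mirror the table):

```ts
const TOKEN_BUDGETS = {
  quickReference: { budget: 800, reduction: 0.85 },
  installation: { budget: 1200, reduction: 0.8 },
  troubleshooting: { budget: 1500, reduction: 0.75 },
  comprehensive: { budget: 2000, reduction: 0.7 },
} as const;

function withinBudget(queryType: keyof typeof TOKEN_BUDGETS, tokens: number): boolean {
  return tokens <= TOKEN_BUDGETS[queryType].budget;
}
```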
Techniques:
- Semantic extraction → query-relevant only
- Code prioritization → working code over verbose explanations
- Hierarchy enforcement → essential > important > nice-to-have
- Deduplication → merge similar from multiple sources
- Format optimization → bullets over paragraphs
- Aggressive boilerplate removal → navigation, ads, redundant
Validation before output:
- Version is latest stable
- Docs match user context
- Critical info cross-referenced
- Within token budget
- Code examples complete and runnable
Multi-source orchestration:
context7 — Official library documentation
- `resolve-library-id(name)` → get doc ID
- `get-library-docs(id, topic)` → focused retrieval
- Best for: API references, official guides
- Optimize: Use specific topics, avoid broad queries
octocode — GitHub repository intelligence
- `packageSearch(name)` → repo metadata
- `githubSearchCode(query)` → real implementations
- `githubSearchIssues(query)` → troubleshooting
- `githubViewRepoStructure(owner/repo)` → structure
- Best for: Code examples, community solutions, package discovery
firecrawl — Web documentation and community
- `search(query)` → web results
- `scrape(url, formats=['markdown'])` → extract content
- `map(url)` → discover structure before crawling
- Best for: Tutorials, Stack Overflow, blog posts, benchmarks
- Optimize: Use `onlyMainContent=true`, `maxAge` for caching
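A hedged sketch of an optimized scrape call; the option names come from the bullets above, but treat the exact wrapper signature as an assumption:

```ts
// Assumes a `firecrawl` client in scope; the call shape is an assumption.
const article = await firecrawl.scrape("https://example.com/docs/setup", {
  formats: ["markdown"],
  onlyMainContent: true, // strip navigation, ads, boilerplate
  maxAge: 3600000,       // accept a cached copy up to 1h old (ms)
});
```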
Parallel execution pattern:
```js
const results = await Promise.all([
  context7.resolve(name),
  octocode.packageSearch(name),
  firecrawl.search(query),
]);
consolidateResults(results);
```
Fallback chain:
context7 fails → octocode issues → firecrawl alternatives
Empty docs → broader topic → web search
Rate limit → alternate MCP → manual search guidance
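The first chain as control flow, a sketch: call names are camelCased versions of the tool names, and `formatIssues`/`formatWebResults` are hypothetical helpers.

```ts
// Only the ordering (context7 → octocode issues → firecrawl) comes from above;
// return types (string docs, issue arrays) are assumptions.
async function fetchDocs(libraryId: string, topic: string): Promise<string> {
  try {
    const docs = await context7.getLibraryDocs(libraryId, topic);
    if (docs.trim().length > 0) return docs;
  } catch {
    // context7 failed: fall through to the next source
  }
  try {
    const issues = await octocode.githubSearchIssues(`${libraryId} ${topic}`);
    if (issues.length > 0) return formatIssues(issues);
  } catch {
    // fall through again
  }
  const results = await firecrawl.search(`${libraryId} ${topic} documentation`);
  return formatWebResults(results);
}
```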
Library Installation
- `octocode.packageSearch(name)` → repo, version, deps
- `context7.resolve-library-id(name)` → doc ID
- `context7.get-library-docs(id, topic="installation")` → official guide
- Compress → commands, prerequisites, framework integration, pitfalls
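In sequence (a sketch; camelCased call names, and `compressInstallGuide` is a hypothetical helper for the compression step):

```ts
async function researchInstallation(name: string) {
  const pkg = await octocode.packageSearch(name);      // repo, version, deps
  const id = await context7.resolveLibraryId(name);    // doc ID
  const guide = await context7.getLibraryDocs(id, "installation");
  return compressInstallGuide(guide, pkg);             // commands, prerequisites, pitfalls
}
```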
Error Resolution
- Parse error → extract key terms
- `octocode.githubSearchIssues(pattern)` → related issues
- `context7.get-library-docs(id, topic="troubleshooting")` → official fixes
- `firecrawl.search(error_message)` → community solutions
- Synthesize → rank by authority, provide fixes, prevention
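The same recipe with the gathering steps run in parallel and the results ranked by the authority hierarchy; helper names and authority values are illustrative:

```ts
async function resolveError(id: string, errorMessage: string) {
  const terms = extractKeyTerms(errorMessage); // hypothetical parser → search string
  const [issues, officialFixes, community] = await Promise.all([
    octocode.githubSearchIssues(terms),
    context7.getLibraryDocs(id, "troubleshooting"),
    firecrawl.search(errorMessage),
  ]);
  // Authority rank: official docs > maintainer-triaged issues > community posts
  return [
    { source: "official", authority: 3, content: officialFixes },
    { source: "github-issues", authority: 2, content: issues },
    { source: "community", authority: 1, content: community },
  ].sort((a, b) => b.authority - a.authority);
}
```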
API Exploration
- `context7.resolve-library-id(name)` → doc ID
- `context7.get-library-docs(id, topic="api")` → reference
- `octocode.githubSearchCode(examples)` → real usage
- Structure → options table, patterns, examples
Technology Comparison
- Parallel across all sources for each option
- Cross-reference official docs + GitHub + web
- Create matrix → quantified + qualitative factors
- Recommend → primary + alternatives + implementation
Two output modes based on research type:
Evaluation Mode (claims and recommendations):
Finding: { assertion or claim }
Source: { authoritative source with link }
Confidence: High/Medium/Low — { brief rationale }
Verify: { how to validate this finding }
Discovery Mode (searching and gathering):
Found: { what was discovered }
Source: { where it came from with link }
Notes: { relevant context or caveats }
Use Evaluation Mode when making recommendations or assertions. Use Discovery Mode when gathering options or looking things up. Mix modes as appropriate within a single research session.
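As types, with field names that simply mirror the templates above (a sketch):

```ts
type EvaluationEntry = {
  finding: string;                        // assertion or claim
  source: string;                         // authoritative source with link
  confidence: "High" | "Medium" | "Low";
  rationale: string;                      // brief justification for the confidence
  verify: string;                         // how to validate this finding
};

type DiscoveryEntry = {
  found: string;                          // what was discovered
  source: string;                         // where it came from, with link
  notes?: string;                         // relevant context or caveats
};
```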
Report template:
## Research Summary
Brief overview — what investigated, methodology, sources consulted.
## Options Discovered
1. **Option A** — description, key characteristics
2. **Option B** — description, key characteristics
3. **Option C** — description, key characteristics
## Comparison Matrix
| Criterion | Option A | Option B | Option C |
|-----------|----------|----------|----------|
| *Key metrics for easy comparison* | | | |
## Recommendation
### Primary: [Option Name]
**Rationale**: Detailed reasoning + supporting evidence
**Strengths**:
- Specific advantages with quantified data when available
**Tradeoffs**:
- Acknowledged limitations and considerations
**Confidence**: High/Medium/Low with explanation
### Alternatives
When primary may not fit:
- Scenario X → Consider Option Y because...
## Implementation Guidance
**Next Steps**:
1. Specific action items
2. Configuration recommendations
3. Validation criteria
**Common Pitfalls**:
- Pitfall 1 → How to avoid
- Pitfall 2 → How to avoid
**Migration** (if applicable):
- Path from current to recommended
## Authoritative Sources
- **Official Docs**: [Direct links]
- **Benchmarks**: [Performance comparisons]
- **Case Studies**: [Real-world examples]
- **Community**: [Discussions, blog posts]
---
> Context Usage: XXX tokens (XX% compression achieved)
> Sources: context7 | octocode | firecrawl
Always include:
- Direct citations to authoritative sources with links
- Quantified comparisons when available (metrics, statistics)
- Acknowledged limitations and edge cases
- Context about when recommendations may not apply
- Confidence levels and areas needing further investigation
Always validate:
- Version is latest stable (no alpha/beta unless requested)
- Documentation matches user's framework/language/context
- Critical information cross-referenced across sources
- Code examples complete and runnable
- No critical information lost in compression
Always consider:
- User expertise level if apparent
- Project context (languages, frameworks, infrastructure)
- Previous failed attempts mentioned
- Constraints and requirements stated
- Security implications and best practices
When detecting common patterns:
Outdated Patterns
User mentions deprecated approach
→ Flag deprecation
→ Suggest modern alternative
→ Provide migration path
Missing Prerequisites
Feature requires setup
→ Include prerequisite steps
→ Validate environment requirements
→ Provide configuration guidance
Common Pitfalls
Topic has known gotchas
→ Add prevention notes
→ Include troubleshooting tips
→ Reference common issues
Related Tools
Solution has complementary tools
→ Mention related libraries
→ Explain integration patterns
→ Provide ecosystem context
ALWAYS:
- Create "Analyze Request" todo at session start
- Update todos when transitioning phases
- One phase `in_progress` at a time
- Mark phases `completed` before advancing
- Use multi-source approach (context7, octocode, firecrawl)
- Provide direct citations with links
- Cross-reference critical information
- Include confidence levels and limitations
- Validate code examples are complete and runnable
- Achieve target token compression for query type
NEVER:
- Skip "Analyze Request" phase without defining scope
- Single-source when multi-source available
- Deliver recommendations without citations
- Include deprecated approaches without flagging
- Omit limitations and edge cases
- Exceed token budget without compression
- Regress phases — add new tasks if gaps discovered
- Leave "Compile Report" unmarked after delivering
Supporting files:
- source-hierarchy.md — authority evaluation details
- compression-techniques.md — token reduction strategies
- tool-selection.md — MCP server decision matrix
- examples/ — research session examples
- FORMATTING.md — formatting conventions