| name | reconnaissance |
| description | Systematic technology and market reconnaissance for extracting actionable intelligence from repositories, papers, and competitive landscapes. |
LIBRARY-FIRST PROTOCOL (MANDATORY)
Before writing ANY code, you MUST check:
Step 1: Library Catalog
- Location: .claude/library/catalog.json
- If match >70%: REUSE or ADAPT
Step 2: Patterns Guide
- Location: .claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md
- If pattern exists: FOLLOW documented approach
Step 3: Existing Projects
- Location: D:\Projects\*
- If found: EXTRACT and adapt
Decision Matrix
| Match | Action |
|---|---|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern exists | FOLLOW pattern |
| In project | EXTRACT |
| No match | BUILD (add to library after) |
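Read as code, the protocol is a simple threshold gate. A minimal sketch in Python, assuming catalog.json lives at the documented location and that a percentage match score has already been computed by some upstream comparison (the scoring step itself is hypothetical and out of scope here):

```python
# Minimal sketch of the library-first decision gate. The match score
# (0-100) is assumed to come from an upstream comparison step.
import json
from pathlib import Path

CATALOG_PATH = Path(".claude/library/catalog.json")

def load_catalog() -> dict:
    """Read the library catalog if present; empty catalog otherwise."""
    return json.loads(CATALOG_PATH.read_text()) if CATALOG_PATH.exists() else {}

def decide_action(match_score: float, pattern_exists: bool, found_in_project: bool) -> str:
    """Apply the decision matrix thresholds to a single candidate."""
    if match_score > 90:
        return "REUSE"      # reuse the library entry directly
    if match_score >= 70:
        return "ADAPT"      # adapt the closest entry minimally
    if pattern_exists:
        return "FOLLOW"     # follow the documented pattern
    if found_in_project:
        return "EXTRACT"    # extract from an existing project
    return "BUILD"          # build new, then add it to the library
```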
TIER 1: CRITICAL SECTIONS
Overview
Reconnaissance is a systematic intelligence-gathering skill designed to extract actionable insights from diverse sources, including GitHub repositories, academic papers, competitive products, and market landscapes. Unlike general research, which synthesizes existing knowledge, reconnaissance actively probes, extracts, and structures novel intelligence for strategic decision-making.
The skill operates on the principle that raw information must be transformed through structured extraction into decision-ready artifacts. It employs a three-phase methodology: Discovery (identify sources), Extraction (pull structured data), and Analysis (synthesize intelligence).
Reconnaissance is essential for:
- Pre-project feasibility assessments
- Technology stack evaluation
- Competitive positioning
- Academic literature mining for implementation patterns
- Market opportunity identification
Core Principles
Reconnaissance operates on 5 fundamental principles:
Principle 1: Source Diversification
Never rely on a single source type. Cross-reference findings across repositories, papers, documentation, and market signals.
In Practice:
- Query GitHub + arXiv + product pages for any technology assessment
- Compare stated claims against actual implementations
- Verify recency and maintenance status of all sources
Principle 2: Structured Extraction
Raw information is useless without structure. Every reconnaissance output must follow defined schemas.
In Practice:
- Use MANIFEST.md templates for repository extractions
- Create comparison charts for multi-source analysis
- Tag all findings with source, date, and confidence level
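A tagged finding from such an extraction might look like the sketch below; only source, date, and confidence are mandated by this principle, and the remaining fields (and the example values) are illustrative:

```python
# Illustrative finding record for a structured extraction.
# Only source, date, and confidence are required by the principle;
# the other fields are hypothetical conveniences.
from dataclasses import dataclass

@dataclass
class Finding:
    claim: str          # the extracted statement
    source: str         # direct link to a code file, paper section, or doc page
    date: str           # ISO8601 date the finding was captured
    confidence: float   # 0.0-1.0, capped by the source-type ceiling
    ceiling: str        # "witnessed" | "reported" | "inferred"

example = Finding(
    claim="Repository exposes a gRPC aggregation API",
    source="https://github.com/example/repo/blob/main/server/api.proto",  # placeholder URL
    date="2025-01-15",
    confidence=0.9,
    ceiling="witnessed",
)
```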
Principle 3: Evidence Grounding
Every claim must trace back to a verifiable source with explicit confidence ceiling.
In Practice:
- Link directly to code files, paper sections, or documentation pages
- Apply confidence ceilings: witnessed (0.95), reported (0.70-0.85), inferred (0.65)
- Mark gaps explicitly rather than inferring without evidence
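Ceilings are easiest to enforce mechanically. A sketch, using the per-class maxima from the Confidence Cascading section later in this document:

```python
# Clamp a raw confidence estimate to its source-type ceiling.
# Ceiling values are the per-class maxima from the Confidence Cascading
# section; treat them as defaults, not fixed constants.
CEILINGS = {"witnessed": 0.95, "reported": 0.85, "inferred": 0.65}

def apply_ceiling(raw_confidence: float, ceiling_type: str) -> float:
    """Never report more confidence than the source type allows."""
    return min(raw_confidence, CEILINGS[ceiling_type])

assert apply_ceiling(0.99, "reported") == 0.85
```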
Principle 4: Actionable Output
Intelligence must drive decisions. Abstract findings without recommendations waste effort.
In Practice:
- End every recon with specific recommendations
- Include effort estimates and risk assessments
- Prioritize findings by strategic impact
Principle 5: Temporal Awareness
Technology landscapes change rapidly. All reconnaissance has an expiration date.
In Practice:
- Date-stamp all outputs prominently
- Note last-commit dates for repositories
- Flag papers older than 2 years as potentially outdated
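The date checks above are simple to automate. A small sketch, where the 12-month repository threshold is an added assumption rather than a rule stated here:

```python
# Flag potentially stale sources: papers older than ~2 years, and
# repositories without commits in the last 12 months (the repository
# threshold is an assumption, not mandated above).
from datetime import date

def is_stale_paper(published: date, today: date | None = None) -> bool:
    today = today or date.today()
    return (today - published).days > 2 * 365

def is_stale_repo(last_commit: date, today: date | None = None) -> bool:
    today = today or date.today()
    return (today - last_commit).days > 365

assert is_stale_paper(date(2021, 6, 1), today=date(2025, 1, 15))
```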
When to Use
Use Reconnaissance When:
- Evaluating a new technology for adoption
- Analyzing competitor products or approaches
- Extracting implementation patterns from papers
- Assessing open-source project viability
- Mapping market landscape before product decisions
- Pre-mortem analysis for project risks
Do NOT Use Reconnaissance When:
- Synthesizing already-gathered sources (use literature-synthesis)
- Single-source deep reading (use academic-reading-workflow)
- General question answering (use researcher)
- Implementation planning (use research-driven-planning)
Main Workflow
Phase 1: Scope Definition
Agent: intent-analyzer
Purpose: Clarify reconnaissance objectives and boundaries
Inputs:
- Target domain or technology
- Strategic questions to answer
- Time and depth constraints
- Output format requirements
Outputs:
- Scoped reconnaissance brief
- Source categories to probe
- Success criteria
Phase 2: Source Discovery
Agent: research-agent
Purpose: Identify and qualify sources
Process:
- GitHub search for repositories matching domain
- arXiv/papers search for academic sources
- Product/company identification for competitive analysis
- Documentation and blog discovery
- Source qualification (recency, authority, relevance)
Outputs:
- Qualified source list with metadata
- Source-type distribution analysis
- Gap identification (what sources are missing?)
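Step 5 (source qualification) can be made concrete with a simple scoring pass; the weights and the 0.6 cut-off below are illustrative assumptions, not part of the skill definition:

```python
# Qualify a discovered source on recency, authority, and relevance.
# The weights and cut-off are assumptions for illustration only.
def qualify_source(recency: float, authority: float, relevance: float) -> float:
    """Each input is a 0-1 sub-score; returns a weighted overall score."""
    return 0.3 * recency + 0.3 * authority + 0.4 * relevance

candidates = {
    "github:example/federated-lib": qualify_source(0.9, 0.6, 0.8),  # placeholder repo
    "arXiv:2504.00407": qualify_source(0.8, 0.7, 0.9),
}
qualified = {name: score for name, score in candidates.items() if score >= 0.6}
```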
Phase 3: Structured Extraction
Agent: code-extractor or document-analyst
Purpose: Pull structured data from each source
Process:
- Apply extraction template per source type:
  - Repository: MANIFEST.md (structure, deps, patterns)
  - Paper: KEY-FINDINGS.md (methods, results, code availability)
  - Product: FEATURE-MATRIX.md (capabilities, pricing, integration)
- Normalize extracted data to common schema
- Cross-reference overlapping claims
Outputs:
- Per-source extraction documents
- Normalized data tables
- Cross-reference matrix
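A minimal sketch of the common schema that per-source extractions are normalized into; the exact field set is an assumption and would be tuned per engagement:

```python
# Hypothetical common schema that repository, paper, and product
# extractions are normalized into before cross-referencing.
from typing import TypedDict, Literal

class Extraction(TypedDict):
    source_id: str                                    # repo URL, arXiv ID, or product name
    source_type: Literal["repository", "paper", "product"]
    extracted_at: str                                 # ISO8601 date stamp
    capabilities: list[str]                           # normalized capability tags
    claims: list[dict]                                # claim + evidence link + confidence
    gaps: list[str]                                   # what this source does not cover

def cross_reference(extractions: list[Extraction], capability: str) -> list[str]:
    """Return the sources that claim a given capability."""
    return [e["source_id"] for e in extractions if capability in e["capabilities"]]
```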
Phase 4: Intelligence Synthesis
Agent: synthesis-agent
Purpose: Transform extractions into actionable intelligence
Process:
- Identify patterns across sources
- Surface contradictions and gaps
- Generate comparison charts
- Formulate recommendations with confidence ceilings
- Create decision frameworks
Outputs:
- COMPREHENSIVE-ANALYSIS.md
- COMPARISON-CHART.md
- RECOMMENDATIONS.md with action items
- GAP-INVENTORY.md with specific missing capabilities
Phase 5: Gap-Filling Extraction (CRITICAL)
Agent: code-extractor + research-agent
Purpose: Find external repos/research to fill discovered gaps, extract ONLY applicable parts, delete everything else
Process:
- Gap Identification: Take gaps from Phase 4 GAP-INVENTORY.md
- Source Discovery: Search GitHub/arXiv/papers for solutions to each gap
- Pre-Mortem Analysis: Before cloning, assess applicability:
  - Does target project already have this capability?
  - Is the architecture compatible?
  - What's the integration effort?
- Selective Clone: Clone promising repos to temp directory
- Extraction Criteria: For each cloned repo, evaluate:
  - KEEP if:
    - Directly fills identified gap
    - Architecture compatible with target
    - Integration effort < gap severity
  - DELETE if:
    - Target already has equivalent functionality
    - Architecture incompatible
    - Duplicates existing capabilities
    - Integration effort exceeds value
- Focused Extraction: Extract ONLY applicable patterns:
  - Copy relevant source files to extraction folder
  - Create integration notes per extracted component
  - Map extracted code to target project structure
- Cleanup: Delete cloned repos after extraction complete
- Extraction Report: Document what was kept, what was deleted, and why
Outputs:
- {target}-gap-extractions/ folder with:
  - Extracted source files (only applicable ones)
  - EXTRACTION-MANIFEST.md - what was extracted and why
  - INTEGRATION-GUIDE.md - how to integrate each component
  - DELETED-SUMMARY.md - what was NOT applicable (with reasons)
- Updated GAP-INVENTORY.md with extraction status
Critical Rules:
- NEVER extract without pre-mortem applicability analysis
- ALWAYS delete more than you keep (80%+ deletion rate is healthy)
- Document every deletion decision for future reference
- Map extractions to specific gaps they fill
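The KEEP/DELETE criteria reduce to a guard that runs before anything is copied out of a cloned repo. A sketch, assuming integration effort and gap severity share a rough 1-5 scale:

```python
# Pre-mortem guard for gap-filling extraction (Phase 5).
# Effort and severity are assumed to share a rough 1-5 scale.
def should_keep(fills_gap: bool, target_has_equivalent: bool,
                architecture_compatible: bool,
                integration_effort: int, gap_severity: int) -> bool:
    """Return True only if the component is worth extracting."""
    if target_has_equivalent or not architecture_compatible:
        return False
    return fills_gap and integration_effort < gap_severity

# Expect a high deletion rate: keeping less than ~20% of cloned code is healthy.
assert not should_keep(True, True, True, integration_effort=1, gap_severity=5)
```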
Phase 6: Delivery and Storage
Agent: delivery-agent
Purpose: Package and preserve intelligence
Process:
- Organize outputs in dated folder structure:
  {target}-recon-{date}/
    EXECUTIVE-SUMMARY.md
    MANIFEST.md
    COMPREHENSIVE-ANALYSIS.md
    COMPARISON-CHART.md
    RECOMMENDATIONS.md
    GAP-INVENTORY.md
    gap-extractions/
      EXTRACTION-MANIFEST.md
      INTEGRATION-GUIDE.md
      DELETED-SUMMARY.md
      extracted-files/
- Update memory-mcp with key findings
- Generate executive summary
- Archive raw extractions for audit trail
Outputs:
- Complete reconnaissance package with gap extractions
- Memory-mcp entries for future retrieval
- Executive brief for stakeholders
- Actionable extraction folder ready for integration
TIER 2: ESSENTIAL SECTIONS
Pattern Recognition
Different reconnaissance targets require different approaches:
Technology Evaluation Pattern
Triggers: "evaluate", "assess", "should we use", "compare frameworks" Characteristics:
- Focus on implementation feasibility
- Heavy repository analysis
- Performance and scalability concerns Key Focus: Code quality, maintenance activity, community health Approach: Repository-first with paper backup for theoretical grounding
Competitive Analysis Pattern
Triggers: "competitor", "market position", "alternative to" Characteristics:
- Product feature comparison
- Pricing and positioning analysis
- Strategic differentiation needs Key Focus: Feature gaps, market positioning, integration capabilities Approach: Product documentation and public materials analysis
Academic Extraction Pattern
Triggers: "paper", "research", "implementation from", "replicate" Characteristics:
- Method and algorithm focus
- Code availability critical
- Reproducibility concerns Key Focus: Implementation details, performance claims, code artifacts Approach: Paper-first with repository search for implementations
Market Landscape Pattern
Triggers: "landscape", "players in", "market for", "opportunity" Characteristics:
- Broad coverage over depth
- Trend identification
- Segment analysis Key Focus: Market size, key players, emerging trends, gaps Approach: Multi-source sweep with structured aggregation
Advanced Techniques
Multi-Model Reconnaissance
Route different extraction tasks to optimal models:
- Gemini: Large repository analysis (1M context), Google Search grounding
- Codex: Automated code extraction and pattern mining
- Claude: Synthesis and strategic analysis
# Example multi-model routing
bash -lc "gemini --all-files 'extract architecture patterns from this repo'"
bash -lc "codex exec 'analyze dependency graph and identify risks'"
Temporal Layering
Compare sources across time to identify trends:
- Historical baseline (2+ years ago)
- Recent developments (6-24 months)
- Current state (last 6 months)
- Emerging signals (preprints, announcements)
Confidence Cascading
Layer confidence based on source quality:
witnessed:code-analysis -> conf:0.95
witnessed:doc-extraction -> conf:0.90
reported:paper-claims -> conf:0.85
reported:blog-posts -> conf:0.70
inferred:trend-analysis -> conf:0.65
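Expressed as a lookup, the cascade might be implemented like this sketch, with unknown source tags falling back to the most conservative value:

```python
# Confidence cascade as a lookup table; unknown source tags fall back
# to the most conservative value.
CASCADE = {
    "witnessed:code-analysis": 0.95,
    "witnessed:doc-extraction": 0.90,
    "reported:paper-claims": 0.85,
    "reported:blog-posts": 0.70,
    "inferred:trend-analysis": 0.65,
}

def cascade_confidence(source_tag: str) -> float:
    return CASCADE.get(source_tag, 0.65)
```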
Common Anti-Patterns
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Single-Source Reliance | Biased or incomplete view | Always cross-reference 3+ sources |
| Recency Blindness | Outdated recommendations | Check last-commit/publish dates prominently |
| Claim Inflation | Overstating source confidence | Apply strict ceiling discipline |
| Feature List Syndrome | Listing without analysis | Always include strategic implications |
| Extraction Without Synthesis | Raw data dumps | Require actionable conclusions |
| Scope Creep | Unbounded reconnaissance | Time-box and scope strictly upfront |
Practical Guidelines
Full vs. Quick Reconnaissance
| Aspect | Quick Recon (2-4h) | Full Recon (1-2 days) |
|---|---|---|
| Sources | 3-5 primary | 10-20 comprehensive |
| Depth | Surface extraction | Deep analysis |
| Output | Single summary | Full package |
| Use When | Time-sensitive decisions | Strategic planning |
Quality Checkpoints
- Every claim has source link
- Confidence ceilings applied consistently
- Date stamps on all documents
- Recommendations are actionable
- Gaps explicitly identified
Trade-offs
- Breadth vs. Depth: Quick decisions need breadth; implementation needs depth
- Speed vs. Accuracy: Higher confidence requires more cross-referencing
- Comprehensiveness vs. Actionability: Too much data obscures recommendations
TIER 3: INTEGRATION SECTIONS
Cross-Skill Coordination
Upstream Skills (provide input)
| Skill | When Used Before | What It Provides |
|---|---|---|
| intent-analyzer | Always | Clarified objectives and constraints |
| prompt-architect | Complex requests | Optimized reconnaissance brief |
Downstream Skills (use output)
| Skill | When Used After | What It Does |
|---|---|---|
| literature-synthesis | Multi-paper recon | Synthesizes extracted papers |
| research-driven-planning | Tech evaluation | Plans implementation from findings |
| decision-framework | Strategic recon | Structures decision from intelligence |
Parallel Skills (work together)
| Skill | When Co-invoked | Coordination |
|---|---|---|
| code-extractor | Repository recon | Parallel extraction tasks |
| web-researcher | Market recon | Parallel web searches |
MCP Requirements
Required MCPs
| MCP | Purpose | Why Needed |
|---|---|---|
| memory-mcp | Store findings | Cross-session retrieval of reconnaissance |
| sequential-thinking | Complex analysis | Multi-step reasoning chains |
Optional MCPs
| MCP | Purpose | When to Enable |
|---|---|---|
| playwright | Web scraping | Product page extraction |
| github-mcp | Repo analysis | Deep repository intelligence |
MCP Tagging Protocol
{
"WHO": "reconnaissance-{session_id}",
"WHEN": "ISO8601 timestamp",
"PROJECT": "target-domain",
"WHY": "reconnaissance|competitive-analysis|tech-eval"
}
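A sketch of constructing that tag at write time; the session id and project name are passed in directly, and the actual memory-mcp write call is out of scope here:

```python
# Build the WHO/WHEN/PROJECT/WHY tag attached to memory-mcp writes.
from datetime import datetime, timezone

def build_mcp_tag(session_id: str, project: str, why: str) -> dict:
    assert why in {"reconnaissance", "competitive-analysis", "tech-eval"}
    return {
        "WHO": f"reconnaissance-{session_id}",
        "WHEN": datetime.now(timezone.utc).isoformat(),
        "PROJECT": project,
        "WHY": why,
    }

tag = build_mcp_tag("a1b2c3", "edge-inference-platforms", "competitive-analysis")
```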
Input/Output Contracts
inputs:
  target: string              # Required: domain, technology, or competitor name
  questions: list[string]     # Required: specific questions to answer
  depth: enum[quick, full]    # Optional: default 'quick'
  sources: list[string]       # Optional: pre-specified sources to include
  constraints:
    time_limit: string        # Optional: e.g., "4 hours"
    exclusions: list[string]  # Optional: sources to skip
outputs:
  reconnaissance_package:
    manifest: file            # Source inventory with metadata
    extractions: list[file]   # Per-source structured extractions
    analysis: file            # COMPREHENSIVE-ANALYSIS.md
    comparison: file          # COMPARISON-CHART.md (if multi-source)
    recommendations: file     # Actionable next steps
  executive_summary: string   # Brief for stakeholders
  memory_entries:
    key_findings: list        # Stored to memory-mcp
    source_index: object      # For future retrieval
  metadata:
    confidence: float         # Overall confidence (0.0-1.0)
    ceiling: string           # Confidence ceiling type
    duration: string          # Actual time spent
    gaps: list[string]        # Identified information gaps
Recursive Improvement
Self-Application
reconnaissance.improve(reconnaissance)
Improvement Dimensions:
- Extraction template coverage
- Source discovery efficiency
- Synthesis quality metrics
- Recommendation actionability
Eval Harness Integration
- Track source discovery rate
- Measure extraction completeness
- Assess recommendation adoption
- Monitor confidence calibration
Memory Namespace
reconnaissance:{project}:{date}:{type}
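For example, a key for a fog-compute technology evaluation run on a given date would be built as follows (the concrete values are illustrative):

```python
# Build a memory namespace key from the template above; values are examples.
def namespace_key(project: str, run_date: str, recon_type: str) -> str:
    return f"reconnaissance:{project}:{run_date}:{recon_type}"

key = namespace_key("fog-compute", "2025-01-15", "tech-eval")
# -> "reconnaissance:fog-compute:2025-01-15:tech-eval"
```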
TIER 4: CLOSURE SECTIONS
Examples
Example 1: Technology Evaluation
User: "Should we use FATE-LLM for our federated learning project?"
Task("reconnaissance", {
target: "FATE-LLM",
questions: [
"What are the core capabilities?",
"How active is development?",
"What are the integration requirements?",
"Are there production deployments?"
],
depth: "full"
})
Output:
- FATE-LLM MANIFEST.md with architecture analysis
- Dependency and compatibility matrix
- Community health metrics
- Recommendation: Suitable for X, concerns about Y
Example 2: Competitive Analysis
User: "Map the edge inference market for our fog-compute pitch"
Task("reconnaissance", {
target: "edge inference platforms",
questions: [
"Who are the major players?",
"What are the pricing models?",
"Where are the gaps we can fill?"
],
depth: "quick"
})
Output:
- Player inventory with positioning
- Feature comparison matrix
- Gap analysis with opportunity sizing
- Pitch angle recommendations
Example 3: Paper Extraction
User: "Extract implementation patterns from these 3 federated learning papers"
Task("reconnaissance", {
target: "federated learning papers",
sources: ["arXiv:2504.00407", "arXiv:2411.16086", "arXiv:2503.18986"],
questions: [
"What are the key algorithms?",
"Is code available?",
"What are the performance claims?"
],
depth: "full"
})
Output:
- Per-paper extraction documents
- Algorithm comparison chart
- Code availability matrix
- Integration roadmap for fog-compute
Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| No repositories found | Narrow search terms | Broaden keywords, try alternative names |
| Conflicting claims | Multiple sources disagree | Note conflict explicitly, investigate root cause |
| Stale sources only | Domain is new or niche | Expand to preprints, conference proceedings |
| Extraction too shallow | Time constraints | Prioritize highest-impact sources |
| Recommendations unclear | Insufficient synthesis | Re-run synthesis phase with explicit decision criteria |
| Memory storage fails | MCP not configured | Verify memory-mcp connection, use fallback file storage |
Conclusion
Reconnaissance transforms raw information into strategic advantage through systematic discovery, structured extraction, and evidence-grounded synthesis. The skill's value lies not in information gathering alone, but in producing decision-ready intelligence with explicit confidence bounds.
Key success factors:
- Scope strictly before starting
- Diversify sources systematically
- Apply consistent extraction templates
- Synthesize with confidence ceilings
- Deliver actionable recommendations
Completion Verification
- Scope defined with clear questions and constraints
- Sources diversified across 3+ types (repos, papers, products)
- Structured extractions completed for all sources
- Cross-reference matrix identifies conflicts and gaps
- Synthesis includes confidence ceilings on all claims
- Recommendations are specific and actionable
- Memory-mcp entries created for key findings
- Executive summary provides decision-ready brief
- All outputs date-stamped
Confidence: 0.85 (ceiling: research 0.85) - skill definition based on observed reconnaissance patterns and skill-forge requirements.