---
name: consider
description: Selects and applies mental models for structured problem analysis. Triggers when user asks "why", "what if", "how should we", needs systematic problem-solving, or mentions analyzing a situation. MUST BE USED when comparing options, making decisions, or evaluating trade-offs.
---
The command analyzes the problem, gathers the required information, then applies the right model(s).
| Model | Best For | Core Question |
|---|---|---|
| 5-Whys | Root cause analysis | "Why did this happen?" (iterate 5x) |
| 10-10-10 | Decisions with emotional bias | "How will I feel in 10 min/months/years?" |
| Eisenhower | Task prioritization | "Is this urgent AND important?" |
| First Principles | Challenging assumptions | "What is fundamentally true?" |
| Inversion | Risk prevention | "What would guarantee failure?" |
| Occam's Razor | Competing explanations | "Which requires fewest assumptions?" |
| One Thing | Finding leverage | "What makes everything else easier?" |
| Opportunity Cost | Tradeoff analysis | "What am I giving up?" |
| Pareto | Impact prioritization | "Which 20% drives 80% of results?" |
| Second-Order | Consequence analysis | "And then what happens?" |
| SWOT | Strategic position | "Strengths/Weaknesses/Opportunities/Threats?" |
| Via Negativa | Simplification | "What should I remove?" |
| Six Hats | Parallel perspectives | "What are all the angles?" |
| TOC | Systemic root cause + conflict resolution | "What constraint is blocking the system?" |
Full model templates: See references/ directory for complete execution frameworks.
| Need Type | Source | Tool/Method |
|---|---|---|
| Historical context | Local | Read (logs, docs, git history) |
| Codebase patterns | Local | Task(Explore) with constraints |
| Current metrics | Local | Read analytics, logs |
| Market data | Web | Task + WebSearch |
| Competitor info | Web | Task + WebSearch |
| Industry benchmarks | Web | Task + WebSearch |
| User preferences | User | AskUserQuestion |
| Success criteria | User | AskUserQuestion |
| Constraints/limits | User | AskUserQuestion |
| Technical specs | Local/User | Read docs OR AskUserQuestion |
When information gathering is needed, use the Task tool with structured prompts for token efficiency.

Codebase exploration prompt template:
```
@type: AnalyzeAction
about: "[specific question about codebase/docs]"
@return Answer:
- text: string (direct answer, max 200 chars)
- evidence: string[] (file:line references, max 5)
- confidence: string (high|medium|low)
@constraints:
  maxTokens: 2000
  format: JSON object
Return ONLY the specified structure. No preamble or explanations.
```
Use subagent_type: Explore with thoroughness based on scope:
- Single file/function: quick
- Module/feature: medium
- Cross-cutting concern: thorough
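A filled-in example of this prompt and the shape of the response it should produce (the question, file paths, and findings are all hypothetical):

```
Prompt (subagent_type: Explore, thoroughness: medium):
  @type: AnalyzeAction
  about: "How is retry logic implemented for outbound HTTP calls?"
  @return Answer:
  - text, evidence, confidence (as specified above)
  @constraints:
    maxTokens: 2000
    format: JSON object

Expected return (JSON object matching the Answer structure; values illustrative):
  {
    "text": "Retries are centralized in HttpClientWrapper with exponential backoff; the billing service bypasses it.",
    "evidence": ["src/lib/http_client.py:42", "services/billing/client.py:17"],
    "confidence": "medium"
  }
```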
Web research prompt template:

```
@type: AnalyzeAction
query: "[specific research query]"
@return ItemList (max 5 items):
- position: integer
- name: string (source name)
- url: string (if available)
- summary: string (max 150 chars, key finding)
- relevance: string (high|medium|low)
@constraints:
  maxTokens: 3000
  format: markdown table
Return ONLY the specified structure. No commentary.
```
Use WebSearch or WebFetch for:
- Current market conditions
- Competitor analysis
- Industry benchmarks
- Recent trends or news
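A filled-in example for a competitor-analysis query (the query, sources, and findings are illustrative; URLs are placeholders):

```
query: "managed feature-flag platforms pricing comparison 2024"
Return (markdown table matching the ItemList structure; all entries illustrative):

| position | name | url | summary | relevance |
|---|---|---|---|---|
| 1 | VendorOne pricing page | https://example.com/pricing | Usage-based tiers; free tier capped at 1k monthly active users | high |
| 2 | 2024 feature-flag adoption survey | https://example.com/survey | Audit logging cited as top selection criterion by platform teams | medium |
```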
Invoke multiple Task calls in a single message:
- Codebase analysis (Task/Explore)
- Web research (Task with WebSearch)
- These run in parallel, reducing latency
Example parallel invocation:
Task 1: Explore codebase for error handling patterns
Task 2: WebSearch for "industry error handling best practices 2024"
Both return focused, structured responses within token budgets.
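A compact sketch of the two prompts sent together in one message, reusing the templates above (the specific question and query are illustrative):

```
Task 1 (subagent_type: Explore):
  @type: AnalyzeAction
  about: "Where and how are errors handled in the API layer?"
  @constraints: maxTokens: 2000, format: JSON object

Task 2 (web research via WebSearch):
  @type: AnalyzeAction
  query: "industry error handling best practices 2024"
  @constraints: maxTokens: 3000, format: markdown table
```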
Serial chains (each model's output feeds the next):
- Diagnostic Chain: 5-Whys → First Principles → Inversion (find root → verify assumptions → prevent recurrence)
- Decision Chain: Opportunity Cost → Second-Order → 10-10-10 (what you give up → consequences → time horizons)
- Priority Chain: Pareto → One Thing → Via Negativa (vital few → single leverage → remove rest)
- Strategic Chain: SWOT → Inversion → Second-Order (position → failure modes → consequences)

Common combinations:
- High-stakes decision: 10-10-10 + Inversion + Second-Order
- Strategic pivot: SWOT + First Principles + Opportunity Cost
- Simplification: Via Negativa + Pareto + One Thing
At analysis start, if MCP memory tools are available:
Use mcp__memory__search_nodes to find relevant prior analyses:
search_nodes("{key problem terms}")
Look for:
- Similar Problem entities (entityType: "Problem")
- Related RootCause entities (entityType: "RootCause")
- Applicable Insight entities (entityType: "Insight")
If matches found, use mcp__memory__open_nodes to get details:
open_nodes(["problem-similar-issue", "insight-relevant-finding"])
Present to user:

```
## Prior Context (from memory)

**Similar problems analyzed:**
- [problem name]: [key observations]

**Relevant insights:**
- [insight]: [content, outcome]

**Recurring root causes in this area:**
- [root cause]: [occurrence count]
```
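A filled-in recall summary might look like this (problem names, insights, and counts are illustrative):

```
## Prior Context (from memory)

**Similar problems analyzed:**
- flaky-ci-pipeline: intermittent failures traced to shared test fixtures; 5-Whys plus Inversion worked well

**Relevant insights:**
- insight-isolate-test-state: isolating per-test state removed most flakes; validated in follow-up

**Recurring root causes in this area:**
- shared-mutable-fixtures: 3 occurrences
```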
Use prior context to:
- Suggest models that worked well before
- Highlight root causes that recur
- Avoid repeating failed approaches
- Build on validated insights
Skip memory recall if:
- MCP memory tools not available
- User requests fresh analysis
- No relevant matches found
Assess where each piece of required information lives:

Available locally?
- Conversation history
- Codebase/project files
- User-provided documents
Requires web research?
- Market/competitor data
- Industry benchmarks
- Current trends
Must ask user?
- Personal values/priorities
- Constraints not documented
- Success criteria
Execute information gathering based on assessment:
- Local: Use Read or Task(Explore) with token constraints
- Web: Use Task with WebSearch, structured return format
- User: Use AskUserQuestion with specific, focused questions
Parallel execution: If needs are independent, invoke multiple Task calls in single message.
Token budget guidance:
- Simple lookup: 1000-2000 tokens
- Moderate analysis: 2000-3000 tokens
- Complex research: 3000-5000 tokens
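When the user path is needed, keep each question narrow enough to answer in one sentence. Illustrative wording (hypothetical, not prescribed):

```
- "What matters more for this decision: time to ship or long-term maintainability?"
- "Is the stated budget a hard limit or a guideline?"
- "What would make this a success six months from now?"
```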
With gathered context:
- Load full model template from references/[model-name].md
- Apply model systematically using template structure
- For serial chains: complete each model before starting next (see the sketch after this list)
- For parallel triangulation: apply all models, then compare
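A compressed sketch of a serial chain, using the Diagnostic Chain on a hypothetical incident (all findings are illustrative):

```
Problem: "The nightly batch job failed three times this week."

5-Whys (find root): job fails → DB connections exhausted → connections never released
  → cleanup skipped on timeout → batch client has no timeout handling
First Principles (verify assumptions): "connections must stay open for the whole run" is not
  fundamentally true; each step can open and close its own connection
Inversion (prevent recurrence): failure is guaranteed if pool saturation goes unmonitored →
  add a pool-usage alert and a connection limit in staging
```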
Deliver:
- Key Insight: Single most important finding (1-2 sentences)
- Recommended Action: Specific next step
- Confidence Level: High/Medium/Low with reasoning
- Information Gaps: What couldn't be determined (if any)
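An illustrative synthesis for a hypothetical batch-job incident (content is invented to show the expected shape and brevity):

```
Key Insight: The failures trace to missing timeout handling in the batch client, not to database capacity.
Recommended Action: Add per-step connection handling with timeouts and an alert on pool saturation before the next nightly run.
Confidence Level: Medium - root cause confirmed in code, but only one week of failure data was reviewed.
Information Gaps: No production metrics on connection pool usage.
```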
Before proceeding to execution, verify:
- Problem type confirmed with user
- Model selection appropriate for type + focus
- Information needs classified (local/web/user)
- Required information gathered with structured responses
- Token budgets respected in subagent calls
- No open-ended research (all queries focused)
Red flags requiring user clarification:
- Problem fits multiple types equally
- Critical information unavailable
- High emotional loading detected
- Conflicting constraints identified
Analysis is successful when:
- Problem correctly classified and confirmed
- Required information gathered efficiently (minimal tokens)
- Model(s) applied with full rigor using templates
- Insight is specific and actionable
- Confidence level justified
- User can take immediate action on recommendation
Classification Output Format
For the problem classification section (step 1), use TOON structured format:
```
@type: AnalyzeAction
name: problem-classification
object: [problem statement text]
actionStatus: CompletedActionStatus
classification:
  primaryType: [DIAGNOSIS|DECISION|PRIORITIZATION|INNOVATION|RISK|FOCUS|OPTIMIZATION|STRATEGY]
  temporalFocus: [PAST|PRESENT|FUTURE]
  complexity: [SIMPLE|COMPLICATED|COMPLEX]
  emotionalLoading: [HIGH|LOW]
  signals[N]: [key,signal,words]
```
Note: Keep all reasoning, framework selection, model execution, and synthesis as markdown prose. Only use TOON for the structured classification output at the beginning of the analysis.
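A filled-in classification for a hypothetical problem statement might look like:

```
@type: AnalyzeAction
name: problem-classification
object: "Our deploys fail intermittently and nobody knows why"
actionStatus: CompletedActionStatus
classification:
  primaryType: DIAGNOSIS
  temporalFocus: PAST
  complexity: COMPLICATED
  emotionalLoading: LOW
  signals[3]: [fail,intermittently,why]
```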
Model execution templates (read when applying a specific model):
- references/5-whys.md - Root cause drilling
- references/10-10-10.md - Time horizon analysis
- references/eisenhower.md - Urgency/importance matrix
- references/first-principles.md - Assumption challenging
- references/inversion.md - Failure mode analysis
- references/occams-razor.md - Simplest explanation
- references/one-thing.md - Leverage identification
- references/opportunity-cost.md - Tradeoff analysis
- references/pareto.md - 80/20 analysis
- references/second-order.md - Consequence chains
- references/swot.md - Strategic position
- references/via-negativa.md - Improvement by subtraction
- references/six-hats.md - Parallel perspective exploration
- references/toc.md - Theory of Constraints logical thinking
For memory schema details, see mcp/memory-schema.md.