
Deep analysis debugging mode for complex issues. Activates methodical investigation protocol with evidence gathering, hypothesis testing, and rigorous verification. Use when standard troubleshooting fails or when issues require systematic root cause analysis.

## Install Skill

1. Download the skill.
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section.
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file.

Note: Verify the skill by reading through its instructions before using it.

## SKILL.md

name: debug-like-expert
description: Deep analysis debugging mode for complex issues. Activates methodical investigation protocol with evidence gathering, hypothesis testing, and rigorous verification. Use when standard troubleshooting fails or when issues require systematic root cause analysis.
This skill activates when standard troubleshooting has failed: the issue requires methodical investigation with evidence gathering, hypothesis testing, and rigorous verification, not quick fixes. You are entering the mindset of a senior engineer who debugs with scientific rigor, using the scientific method to identify root causes rather than patching symptoms.

Important: If you wrote or modified any of the code being debugged, you have cognitive biases about how it works. Your mental model of "how it should work" may be wrong. Treat code you wrote with MORE skepticism than unfamiliar code; you're blind to your own assumptions.

**VERIFY, DON'T ASSUME.** Every hypothesis must be tested. Every "fix" must be validated. No solutions without evidence.

ESPECIALLY: Code you designed or implemented is guilty until proven innocent. Your intent doesn't matter - only the code's actual behavior matters. Question your own design decisions as rigorously as you'd question anyone else's.

## Phase 1: Gather Evidence

Before proposing any solution:

A. Document Current State

  • What is the EXACT error message or unexpected behavior?
  • What are the EXACT steps to reproduce?
  • What is the ACTUAL output vs EXPECTED output?
  • When did this stop working correctly (if known)?
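
A minimal, runnable reproduction pins these answers down better than prose. Below is a sketch in Python, where `parse_price` is a hypothetical stand-in for whatever code path is under investigation:

```python
# repro.py -- minimal reproduction capturing the EXACT input, output, and error.
# parse_price is a hypothetical stand-in for the real function being debugged.

def parse_price(raw: str) -> float:
    return float(raw.strip("$"))  # placeholder for the actual implementation

raw_input = "$1,299.00"  # the EXACT input that triggers the behavior
expected = 1299.00       # the EXPECTED output

try:
    actual = parse_price(raw_input)
    print(f"expected={expected!r} actual={actual!r} match={actual == expected}")
except Exception as exc:
    # Record the EXACT error message, not a paraphrase of it.
    print(f"raised {type(exc).__name__}: {exc}")
```

Running the script is itself the reproduction step: anyone (including future you) can re-trigger the failure with one command.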

B. Map the System

  • Trace the execution path from entry point to failure point
  • Identify all components involved
  • Read relevant source files completely rather than skimming them
  • Note dependencies, imports, configurations affecting this area
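
One cheap way to make the execution path visible before changing anything is to log entry and exit at component boundaries. A sketch, with `load_config` and `handle_request` as hypothetical stand-ins for the real components:

```python
# trace_path.py -- entry/exit tracing to map the path from entry point to failure.
import functools

def traced(fn):
    """Log calls, returns, and exceptions so the failing hop is unambiguous."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"-> {fn.__name__}{args!r}")
        try:
            result = fn(*args, **kwargs)
            print(f"<- {fn.__name__} returned {result!r}")
            return result
        except Exception as exc:
            print(f"!! {fn.__name__} raised {type(exc).__name__}: {exc}")
            raise
    return wrapper

@traced
def load_config():
    return {"retries": 3}  # note: never sets "timeout"

@traced
def handle_request(config):
    return config["timeout"]  # raises here, but the cause is upstream

if __name__ == "__main__":
    try:
        handle_request(load_config())
    except KeyError:
        pass  # the trace output above already shows where and why it failed
```

The trace shows the failure surfacing in `handle_request` while the missing value originates in `load_config`: the failure point and the root cause are different components.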

C. Gather External Knowledge (when needed)

  • Use MCP servers for API documentation, library details, or domain knowledge
  • Use web search for error messages, framework-specific behaviors, or recent changes
  • Check official docs for intended behavior vs what you observe
  • Look for known issues, breaking changes, or version-specific quirks

See references/when-to-research.md for detailed guidance on research strategy.

## Phase 2: Form and Test Hypotheses

A. Form Hypotheses

Based on evidence, list possible causes:

  1. [Hypothesis 1] - because [specific evidence]
  2. [Hypothesis 2] - because [specific evidence]
  3. [Hypothesis 3] - because [specific evidence]

B. Test Each Hypothesis

For each hypothesis:

  • What would prove this true?
  • What would prove this false?
  • Design a minimal test
  • Execute and document results
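
Continuing the hypothetical `parse_price` example, each hypothesis can be phrased as a pytest test whose outcome would confirm or falsify it:

```python
# test_hypothesis.py -- one falsifiable prediction per hypothesis.
# Hypothesis: parse_price fails only on inputs containing a thousands separator.
import pytest
from repro import parse_price  # hypothetical module from the reproduction step

def test_comma_free_input_parses():
    # If this also fails, the hypothesis is falsified: commas are not the cause.
    assert parse_price("$1299.00") == 1299.00

def test_comma_input_fails_as_predicted():
    # The hypothesis predicts exactly this failure mode.
    with pytest.raises(ValueError):
        parse_price("$1,299.00")
```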

See references/hypothesis-testing.md for scientific method application.

C. Eliminate or Confirm

Don't move forward until you can answer:

  • Which hypothesis is supported by evidence?
  • What evidence contradicts other hypotheses?
  • What additional information is needed?

## Phase 3: Fix and Verify

Only after confirming the root cause:

A. Design Solution

  • What is the MINIMAL change that addresses the root cause?
  • What are potential side effects?
  • What could this break?

B. Implement with Verification

  • Make the change
  • Add logging/debugging output if needed to verify behavior
  • Document why this change addresses the root cause
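
Temporary, clearly marked logging around the change keeps verification honest. A sketch, continuing the same hypothetical example:

```python
# fix.py -- minimal change at the root cause, plus temporary verification logging.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("parse_price")

def parse_price(raw: str) -> float:
    cleaned = raw.strip("$").replace(",", "")  # the minimal root-cause change
    # Temporary: confirms the transformation actually runs; remove once verified.
    log.debug("raw=%r cleaned=%r", raw, cleaned)
    return float(cleaned)
```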

C. Test Thoroughly

  • Does the original issue still occur?
  • Do the reproduction steps now work?
  • Run relevant tests if they exist
  • Check for regressions in related functionality
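
Encoding the original reproduction steps and some previously working inputs as tests covers both the fix and regressions. A sketch using the hypothetical modules above:

```python
# test_regression.py -- original reproduction plus adjacent functionality.
from fix import parse_price  # hypothetical fixed module from the previous step

def test_original_reproduction_now_passes():
    # The EXACT input from the bug report must now produce the EXPECTED output.
    assert parse_price("$1,299.00") == 1299.00

def test_previously_working_inputs_still_work():
    # Guard against regressions in neighboring behavior.
    assert parse_price("$5.00") == 5.00
    assert parse_price("0.99") == 0.99
```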

See references/verification-patterns.md for comprehensive verification approaches.

## Debugging Rules

  1. NO DRIVE-BY FIXES: If you can't explain WHY a change works, don't make it
  2. VERIFY EVERYTHING: Test your assumptions. Read the actual code. Check the actual behavior
  3. USE ALL TOOLS:
    • MCP servers for external knowledge
    • Web search for error messages, docs, known issues
    • Extended thinking ("think deeply") for complex reasoning
    • File reading for complete context
  4. THINK OUT LOUD: Document your reasoning at each step
  5. ONE VARIABLE: Change one thing at a time, verify, then proceed
  6. COMPLETE READS: Don't skim code. Read entire relevant files
  7. CHASE DEPENDENCIES: If the issue involves libraries, configs, or external systems, investigate those too
  8. QUESTION PREVIOUS WORK: Maybe the earlier "fix" was wrong. Re-examine with fresh eyes
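
Rule 5 in practice: hold a known-good baseline fixed and vary exactly one factor per run. A sketch, where `run_experiment` is a hypothetical stand-in for re-running the failing operation:

```python
# one_variable.py -- vary exactly one factor per run; hold everything else fixed.
BASELINE = {"timeout": 30, "retries": 3, "tls": True}

def run_experiment(config: dict) -> bool:
    return config["timeout"] >= 10  # placeholder outcome for illustration

for key, trial_value in [("timeout", 5), ("retries", 0), ("tls", False)]:
    trial = {**BASELINE, key: trial_value}  # one change at a time
    outcome = "ok" if run_experiment(trial) else "FAIL"
    print(f"{key}={trial_value!r}: {outcome}")
```

If two factors had been changed at once, a FAIL could not be attributed to either one.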

## Completion Checklist

Answer these questions:

  • Do you understand WHY the issue occurred?
  • Have you verified the fix actually works?
  • Have you tested the original reproduction steps?
  • Have you checked for side effects?
  • Can you explain the solution to someone else?
  • Would this fix survive code review?

If you can't answer "yes" to all of these, keep investigating.

CRITICAL: Do NOT mark debugging tasks as complete until this checklist passes.

Document each debugging session using this report template:

## Issue: [Problem Description]

### Evidence
[What you observed - exact errors, behaviors, outputs]

### Investigation
[What you checked, what you found, what you ruled out]

### Root Cause
[The actual underlying problem with evidence]

### Solution
[What you changed and WHY it addresses the root cause]

### Verification
[How you confirmed this works and doesn't break anything else]

## Reference Files

For deeper topics, see the reference files:

Debugging mindset: references/debugging-mindset.md

  • First principles thinking applied to debugging
  • Cognitive biases that lead to bad fixes
  • The discipline of systematic investigation
  • When to stop and restart with fresh assumptions

Investigation techniques: references/investigation-techniques.md

  • Binary search / divide and conquer
  • Rubber duck debugging
  • Minimal reproduction
  • Working backwards from desired state
  • Adding observability before changing code

Hypothesis testing: references/hypothesis-testing.md

  • Forming falsifiable hypotheses
  • Designing experiments that prove/disprove
  • What makes evidence strong vs weak
  • Recovering from wrong hypotheses gracefully

Verification patterns: references/verification-patterns.md

  • Definition of "verified" (not just "it ran")
  • Testing reproduction steps
  • Regression testing adjacent functionality
  • When to write tests before fixing

Research strategy: references/when-to-research.md

  • Signals that you need external knowledge
  • What to search for vs what to reason about
  • Balancing research time vs experimentation