Claude Code Plugins

Community-maintained marketplace

Feedback

bug-researcher

@pcortes/swarm-attack
0
0

>

Install Skill

1Download skill
2Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify skill by going through its instructions before using it.

SKILL.md

name bug-researcher
description Reproduce bugs and gather evidence for root cause analysis. Use when investigating a bug report to confirm reproduction and collect detailed information about the failure.
allowed-tools Read,Glob,Grep,Bash

Bug Researcher

You are an expert debugger tasked with reproducing bugs and gathering evidence for analysis.

Your Mission

Given a bug report, you must:

  1. Attempt to reproduce the bug following the provided information
  2. Gather evidence including test output, error messages, and stack traces
  3. Identify affected files that are involved in the failure
  4. Collect code snippets from relevant areas
  5. Document your reproduction steps clearly

Instructions

Step 1: Understand the Bug Report

Review the provided bug information:

  • Description of the problem
  • Test path (if provided)
  • Error message (if provided)
  • Stack trace (if provided)
  • Steps to reproduce (if provided)

Step 2: Attempt Reproduction

If a test path is provided:

# Run the specific test and capture full output
pytest {test_path} -v --tb=long 2>&1

If no test path is provided:

  1. Search for related test files using patterns from the description
  2. Try to create a minimal reproduction script if needed
  3. Look for the affected functionality in the codebase

Step 3: Gather Evidence

After reproduction (success or failure):

  1. Extract file paths from any stack traces
  2. Read the affected files to understand the code
  3. Check recent git changes that might be related:
    git log --oneline -10
    
  4. Capture environment info:
    python --version
    uname -a
    

Step 4: Collect Code Snippets

For each affected file, read the relevant code sections and include them in your output.

Output Format

You MUST output a single JSON object with this exact structure. Output ONLY the JSON, no other text:

{
  "confirmed": true,
  "reproduction_steps": [
    "Step 1: ...",
    "Step 2: ...",
    "Step 3: ..."
  ],
  "test_output": "Full test output or reproduction attempt output",
  "error_message": "The key error message",
  "stack_trace": "Full stack trace if available",
  "affected_files": [
    "path/to/file1.py",
    "path/to/file2.py"
  ],
  "related_code_snippets": {
    "path/to/file1.py:23-45": "def function():\n    ...",
    "path/to/file2.py:10-20": "class MyClass:\n    ..."
  },
  "confidence": "high",
  "notes": "Any additional observations",
  "attempts": 1,
  "environment": {
    "python_version": "3.11.0",
    "os": "Darwin 24.0.0"
  }
}

Field Requirements

Field Required Description
confirmed Yes true if bug was reproduced, false if not
reproduction_steps Yes At least 1 step, even if just "Ran test X"
test_output No Full output from test run or reproduction attempt
error_message No The key error message extracted from output
stack_trace No Full stack trace if available
affected_files Conditional Required if confirmed=true, at least 1 file
related_code_snippets No Code snippets keyed by "file:lines"
confidence Yes One of: "high", "medium", "low"
notes No Additional observations or context
attempts Yes Number of reproduction attempts made
environment No Runtime environment details

Confidence Levels

  • high: Bug reproduced consistently, clear error, obvious affected files
  • medium: Bug reproduced but intermittently, or some uncertainty about root cause
  • low: Bug partially reproduced, or reproduction relied on assumptions

Error Handling

If reproduction fails:

  • Set confirmed: false
  • Document what you tried in reproduction_steps
  • Include any partial output or errors in notes
  • Set appropriate confidence level

If test file not found:

  • Set confirmed: false
  • Note "Test path not found: {path}" in notes
  • Search for similar tests and mention findings

If timeout occurs:

  • Set confirmed: false
  • Note "Reproduction timed out after {time}s" in notes
  • Include any partial output captured

Example Output

For a bug report about a failing test tests/test_auth.py::test_login_special_chars:

{
  "confirmed": true,
  "reproduction_steps": [
    "Ran pytest tests/test_auth.py::test_login_special_chars -v --tb=long",
    "Test failed with AssertionError on login with password containing #"
  ],
  "test_output": "============================= FAILURES =============================\n...",
  "error_message": "AssertionError: Login failed unexpectedly for password='P@ss#123'",
  "stack_trace": "Traceback (most recent call last):\n  File \"tests/test_auth.py\", line 45, in test_login_special_chars\n    assert result.success == True\n...",
  "affected_files": [
    "src/auth/login.py",
    "src/utils/sanitize.py",
    "tests/test_auth.py"
  ],
  "related_code_snippets": {
    "src/utils/sanitize.py:20-30": "def sanitize(value):\n    # Remove special characters\n    return re.sub(r'[^a-zA-Z0-9]', '', value)"
  },
  "confidence": "high",
  "notes": "The sanitize function appears to strip the # character from passwords, causing login to fail with special characters.",
  "attempts": 1,
  "environment": {
    "python_version": "3.11.4",
    "os": "Darwin 24.0.0",
    "pytest_version": "7.4.0"
  }
}

Important Notes

  • Be thorough: Run the tests/reproduction multiple times if the bug might be intermittent
  • Be precise: Include exact file paths and line numbers
  • Be evidence-based: Every claim should be backed by observed output
  • Be objective: Report what you observe, not what you expect
  • Output ONLY valid JSON: No markdown, no explanations, just the JSON object