Claude Code Plugins

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by going through its instructions before using it.

SKILL.md

name: writing-plans
description: Use when design is complete and you need detailed implementation tasks for engineers with zero codebase context - creates comprehensive implementation plans with exact file paths, complete code examples, and verification steps assuming engineer has minimal domain knowledge
allowed-tools: Read, Write, AskUserQuestion, Glob, Grep, mcp__context7__resolve-library-id, mcp__context7__get-library-docs

Writing Plans

External Documentation Search (First Step)

Before researching the codebase, offer to search external documentation for the latest API references using Context7 MCP.

Step 0: Check Context7 Availability

First, check if Context7 MCP tools are available by looking for mcp__context7__ tools in your available tools list.

If Context7 is NOT available:

  • Skip this entire section silently
  • Proceed directly to Python Project Detection
  • Do NOT ask the user about documentation search

If Context7 IS available:

  • Continue with Step 1

Step 1: Ask About Documentation

Use AskUserQuestion:

Question: "Would you like me to search external documentation before planning?"
Header: "Docs"
multiSelect: false
Options:
- Yes, let me specify: I'll enter which libraries/frameworks to search
- Auto-detect from task: Analyze the task and search relevant libraries automatically
- No, skip: Proceed with planning without external docs

Step 2: Get Library Names

If user selected "Yes, let me specify":

  • Ask follow-up: "Which libraries/frameworks should I search? (comma-separated, e.g., 'fastapi, pydantic, sqlalchemy')"

If user selected "Auto-detect from task":

  • Extract technology keywords from the task description
  • Present detected libraries for confirmation: "I detected these technologies: [list]. Should I search docs for these?"

If user selected "No, skip":

  • Proceed directly to Python Project Detection

Step 3: Fetch Documentation

For each library the user confirms:

  1. Resolve library ID:

    mcp__context7__resolve-library-id(libraryName: "library-name")
    

    Select the most relevant match based on description and documentation coverage.

  2. Fetch relevant docs:

    mcp__context7__get-library-docs(
      context7CompatibleLibraryID: "/org/project",
      topic: "[relevant topic from task]",
      mode: "code"
    )
    

    Use mode: "info" for architectural/conceptual questions.

  3. Use for context only: Keep documentation in working memory to inform plan tasks. Do NOT include raw docs in the plan document.

If tool call fails: Inform user that Context7 couldn't fetch docs for that library and continue with available information.

Step 4: Continue to Planning

After documentation is loaded (or skipped), proceed to Python Project Detection with documentation context available.


Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, the docs they might need to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer, but one who knows almost nothing about our toolset or problem domain. Assume they are not well versed in good test design.

Announce at start: "I'm using the writing-plans skill to create the implementation plan."

Python Project Detection

Before writing a plan, detect if this is a Python project and what framework it uses:

Detection signals:

  • pyproject.toml or setup.py → Python project
  • fastapi in dependencies → FastAPI project
  • django in dependencies → Django project
  • asyncio imports or async def → Async code
  • .python-version or uv.lock → Uses uv package manager
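
For illustration only, a minimal sketch of how these signals could be checked; the `detect_python_project` helper and its heuristics are hypothetical, not something the plugin provides:

```python
from pathlib import Path
import re

def detect_python_project(root: str = ".") -> dict:
    """Hypothetical sketch: check the detection signals listed above."""
    base = Path(root)
    signals = {
        "python": (base / "pyproject.toml").exists() or (base / "setup.py").exists(),
        "uv": (base / ".python-version").exists() or (base / "uv.lock").exists(),
        "fastapi": False,
        "django": False,
        "async": False,
    }

    pyproject = base / "pyproject.toml"
    if pyproject.exists():
        deps = pyproject.read_text(encoding="utf-8").lower()
        signals["fastapi"] = "fastapi" in deps
        signals["django"] = "django" in deps

    # Sample source files for async usage (async def or asyncio imports)
    for src in list(base.rglob("*.py"))[:200]:
        text = src.read_text(encoding="utf-8", errors="ignore")
        if re.search(r"^\s*async def |^import asyncio", text, re.MULTILINE):
            signals["async"] = True
            break

    return signals
```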

When Python detected:

  1. Use Skill tool to load python:python-testing
  2. Use Skill tool to load python:python-project
  3. Use patterns from loaded skills in plan tasks
  4. Use uv run prefix for all Python commands

When async/performance code detected:

  • Use Skill tool to load python:python-performance

When FastAPI/Django detected:

  • Dispatch python:python-expert agent for framework-specific patterns

Optional: Track Plan Writing Phases

For complex plans (5+ tasks), use TodoWrite to track progress:

  • Research existing architecture
  • Define high-level approach
  • Break down into tasks
  • Add code examples
  • Generate diagrams (if applicable)
  • Execution handoff

Context: This should be run in a dedicated worktree (created by brainstorming skill).

Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md

Bite-Sized Task Granularity

Each step is one action (2-5 minutes):

  • "Write the failing test" - step
  • "Run it to make sure it fails" - step
  • "Implement the minimal code to make the test pass" - step
  • "Run the tests and make sure they pass" - step
  • "Commit" - step

Task Complexity Classification

Every task MUST include a complexity tag. This enables efficient execution.

| Complexity | Examples | TDD? | Code Review? |
| --- | --- | --- | --- |
| TRIVIAL | Delete file, rename, typo fix, config update | No | Parent verifies git diff |
| SIMPLE | Small refactor, single-file change, add comment | If code changes | Haiku (optional) |
| MODERATE | Feature implementation, bug fix with tests | Yes | Sonnet |
| COMPLEX | Multi-file feature, architectural change | Yes | Opus |

Classification heuristics:

  • TRIVIAL: No new logic, no tests needed, <10 lines changed
  • SIMPLE: Minor logic changes, one test file, <50 lines changed
  • MODERATE: New functionality, multiple test cases, 50-200 lines
  • COMPLEX: Multiple files, architectural decisions, >200 lines or high risk

Plan Document Header

Every plan MUST start with this header:

# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use workflow:executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---

Task Structure

### Task N: [Component Name]

**Complexity:** [TRIVIAL | SIMPLE | MODERATE | COMPLEX]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `uv run pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `uv run pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```

## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits

## Python-Specific Patterns

When Python detected, load patterns from the python plugin instead of duplicating them here.

**Step 1: Load relevant skill**

Use Skill tool: python:python-testing


**Step 2: Copy patterns into plan tasks**

- Fixtures from skill → conftest.py setup task
- Parameterized tests from skill → test task examples
- Mocking patterns from skill → integration test examples
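
For example, a conftest.py setup task drawn from the testing skill could include a sketch like the following; these are illustrative pytest patterns, not the literal content of python:python-testing:

```python
# Illustrative only; the real patterns come from python:python-testing.
# tests/conftest.py
import pytest

@pytest.fixture
def sample_user():
    """Shared test data a conftest.py setup task might ask the engineer to add."""
    return {"id": 1, "name": "Ada", "active": True}

# tests/test_users.py
def test_active_user_is_listed(sample_user):
    assert sample_user["active"] is True

@pytest.mark.parametrize("name, expected", [("Ada", True), ("", False)])
def test_user_name_validation(name, expected):
    assert bool(name) is expected
```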

**For async/performance code detected:**

Use Skill tool: python:python-performance


**For FastAPI/Django detected:**

Task tool (python:python-expert):
  prompt: "Provide [framework] test patterns for [feature]"
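
For a sense of what such framework-specific patterns look like, here is a hypothetical FastAPI endpoint tested with FastAPI's standard TestClient; the actual patterns should come from the dispatched agent:

```python
# Hypothetical example; real patterns come from the python:python-expert agent.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

client = TestClient(app)

def test_health_endpoint():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```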


## Diagram Generation Phase

Diagrams help Claude understand complex plans during execution. They're not just for humans - they serve as a reference that helps Claude maintain context during multi-step implementations.

### Step 1: Ask About Diagrams

Use AskUserQuestion:

Question: "Should I generate diagrams to help with plan execution?" Header: "Diagrams" multiSelect: false Options:

  • Auto-detect: Let Claude decide what diagrams (if any) would help execution
  • Task Dependencies: Show task order, parallelization, and blockers
  • Architecture: Show system components, layers, and boundaries
  • Sequence: Show API flows, service calls, and message passing
  • State: Show status transitions and lifecycle stages
  • Data Model: Show database schema and entity relationships
  • No diagrams: Skip diagram generation entirely

**Default to "Auto-detect"** for complex plans. Only skip diagrams for trivial changes.

### Step 2: Generate Diagrams

If user selects "Auto-detect" or specific types, dispatch diagram-generator agent:

Task tool (doc:diagram-generator):
  description: "Generate Mermaid diagrams for plan"
  prompt: |
    See template at plugins/methodology/workflow/templates/diagram-prompt.md
    MODE: [auto-detect | specific]
    DIAGRAM_TYPES: [user's selection or "auto"]
    PLAN_CONTENT: [full plan text]

**For auto-detect mode**, the agent will:
1. Analyze plan complexity and structure
2. Decide IF any diagrams would help Claude execute
3. Select the most useful diagram type(s)
4. Skip if plan is simple enough that diagrams add no value

### Step 3: Insert Diagrams

Add `## Diagrams` section to the plan document after the header block, before first task:

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL...

**Goal:** ...
**Architecture:** ...
**Tech Stack:** ...

---

## Diagrams

[Insert agent output here]

---

### Task 1: ...
```

When to Skip the Question Entirely

Don't even ask about diagrams when:

  • Plan has < 3 tasks with purely linear sequence
  • Single-file changes (bug fix, refactor)
  • Configuration-only changes

In these cases, just skip to execution handoff.

Execution Handoff

After saving the plan, use the AskUserQuestion tool (do NOT output as plain text):

Question: "Plan saved. How would you like to execute it?" Header: "Execute" Options:

  • Subagent-Driven: Execute in this session with fresh subagent per task, code review between tasks
  • Parallel Session: Open new session in worktree, batch execution with checkpoints
  • Skip: I'll execute manually later multiSelect: false

If Subagent-Driven chosen:

  • REQUIRED SUB-SKILL: Use Skill tool: workflow:subagent-dev
  • Stay in this session
  • Fresh subagent per task + code review between tasks

If Parallel Session chosen:

  • Guide them to open new session in worktree
  • REQUIRED SUB-SKILL: Use Skill tool: workflow:executing-plans

If Skip chosen:

  • Confirm the plan file location
  • End the planning workflow

Workflow Integration

Python Development Skills

When writing Python plans, integrate these python plugin skills:

| Skill | When to Reference | What It Provides |
| --- | --- | --- |
| python:python-testing | All Python test code | Fixtures, mocking, parameterized tests, markers |
| python:python-project | Python commands, dependencies, packaging | uv run, uv add, pyproject.toml, publishing |
| python:python-performance | Async or performance code | asyncio, profiling, caching, optimization patterns |

Agent Dispatch

For complex Python plans, dispatch specialized agents:

Task tool (python:python-expert):
  description: "Get FastAPI/Django patterns for [feature]"
  prompt: "Analyze [feature requirements] and provide:
    1. Framework-specific implementation pattern
    2. Test fixtures and patterns
    3. Common pitfalls to avoid"

Plan Header with Python Context

For Python projects, enhance the header:

# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use workflow:executing-plans to implement this plan task-by-task.
> **Python Skills:** Reference python:python-testing for tests, python:python-project for uv commands.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** Python 3.12+, pytest, [framework if applicable]

**Commands:** All Python commands use `uv run` prefix

---