Claude Code Plugins

Community-maintained marketplace

implementation-planner

@l-lin/dotfiles

Break down technical implementation tasks into actionable steps with codebase analysis. Use when the user says "breakdown the task", "plan the implementation", or when you need to decompose complex features, refactorings, or bug fixes into a structured implementation plan.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by going through its instructions before using it.

SKILL.md

name: implementation-planner
description: Break down technical implementation tasks into actionable steps with codebase analysis. Use when the user says "breakdown the task", "plan the implementation", or when you need to decompose complex features, refactorings, or bug fixes into a structured implementation plan.

Task Breakdown Protocol

You are invoked to break down a technical implementation task into concrete, actionable steps.

Analysis Phase

Step 1: Context Gathering

  1. Read all mentioned files immediately and FULLY:

    • Research documents, tickets, related plans
    • CRITICAL: Use the Read tool WITHOUT limit/offset to read entire files
    • NEVER read files partially (see the sketch after this list)
  2. Spawn parallel research tasks to gather context:

    • Use @codebase-locator to find all files related to the task
    • Use @codebase-analyzer to understand current implementation
    • Use @git-investigator to understand why the current implementation exists
    • Use @codebase-pattern-finder to find similar features to model after
  3. Wait for ALL sub-tasks to complete before proceeding

  4. Read all files identified by research tasks:

    • Read them FULLY into context
    • Cross-reference with requirements
    • Identify discrepancies
  5. Analyze and verify:

    • Note assumptions requiring verification
    • Determine true scope from codebase reality
    • Only ask questions that code investigation cannot answer
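
A minimal sketch of the full-read rule, in the same pseudocode style as the spawning example later in this document. The file path is hypothetical and the exact Read call shape depends on the agent runtime; the point is the presence or absence of limit/offset:

# Correct: pull the entire document into context.
Read(file_path="docs/research/payment-refactor.md")

# Avoid: a partial read drops context the plan may depend on.
Read(file_path="docs/research/payment-refactor.md", offset=0, limit=200)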

Breakdown Rules

Structure (MUST)

  • Each step must be specific and verifiable
  • Include file paths and function names where applicable
  • Order steps by dependency (prerequisites first)
  • Separate implementation from testing

Required Steps (MUST include)

  1. Test setup - Write test cases first (TDD approach)
  2. Implementation steps - Ordered by dependency
  3. Verification steps - Split into:
    • Automated verification: Tests, builds, linting (can be run by agents)
    • Manual verification: UI testing, performance checks (requires human)
  4. Cleanup step - Remove dead code, update docs if needed

Anti-patterns (AVOID)

  • Vague steps like "Update the logic"
  • Steps without file/function references
  • Missing verification steps
  • Skipping test-first approach

Output Format

Return a numbered list of tasks in this format:

1. Write tests for [feature] in [file:line]
   - GIVEN: [scenario]
   - WHEN: [action]
   - THEN: [expected outcome]

2. Implement [function] in [file:line]
   - [specific implementation detail]

3. Update [related code] in [file:line]
   - [what needs to change and why]

4. Run automated verification
   - [ ] Tests pass: `npm test` or `make test`
   - [ ] Build succeeds: `npm run build` or `make build`
   - [ ] Linting passes: `npm run lint` or `make lint`
   - [ ] Type checking passes: `npm run typecheck`

5. Run manual verification
   - [ ] [Specific UI feature works as expected]
   - [ ] [Performance acceptable under load]
   - [ ] [Edge case handling verified]
   - [ ] [No regressions in related features]

6. Cleanup (if needed)
   - Remove [deprecated code] from [file:line]
   - Update [documentation] in [file:line]

Verification Checklist

Each task should pass this check:

  • Can I execute this step without ambiguity?
  • Do I know which file and function to modify?
  • Is the expected outcome clear?
  • Can I verify this step independently?
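
Applied to a single hypothetical step, the check might read like this (file, function, and outcomes are illustrative only):

Step: "Implement rate limiting in src/middleware/rateLimit.ts:20"
  • Execute without ambiguity? Yes - the middleware pattern already exists.
  • Known file and function? Yes - rateLimit() at the location above.
  • Expected outcome clear? Yes - return 429 once the request limit is hit.
  • Independently verifiable? Yes - an integration test can hit the endpoint repeatedly.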

Examples

Good Breakdown

1. Write test for user authentication in tests/auth.test.ts:45
   - GIVEN: User with valid credentials
   - WHEN: login() is called
   - THEN: Returns auth token and sets session cookie

2. Implement login() function in src/auth.ts:120
   - Hash password with bcrypt
   - Query users table for matching email
   - Generate JWT token on success
   - Set httpOnly session cookie

3. Update API route in src/api/auth.ts:30
   - Add POST /api/login endpoint
   - Call login() with request body
   - Return 200 with token or 401 on failure

4. Run automated verification
   - [ ] Unit tests pass: `npm test auth`
   - [ ] Integration tests pass: `npm test api/auth`
   - [ ] Type checking passes: `npm run typecheck`
   - [ ] No linting errors: `npm run lint`

5. Run manual verification
   - [ ] Login form works in browser
   - [ ] Session persists across page refresh
   - [ ] Invalid credentials show error message
   - [ ] Rate limiting prevents brute force

Bad Breakdown

1. Fix the authentication
2. Make sure it works
3. Test it

Problems: No file references, vague steps, no verification criteria

Sub-task Spawning Best Practices

When spawning research sub-tasks:

  1. Spawn multiple tasks in parallel for efficiency
  2. Each task should be focused on a specific area
  3. Provide detailed instructions (see the example prompt below) including:
    • Exactly what to search for
    • Which directories to focus on
    • What information to extract
    • Expected output format
  4. Be EXTREMELY specific about directories:
    • Include full path context in prompts
  5. Request specific file:line references in responses
  6. Wait for all tasks to complete before synthesizing
  7. Verify sub-task results:
    • If sub-task returns unexpected results, spawn follow-up tasks
    • Cross-check findings against actual codebase
    • Don't accept incorrect results

Example spawning multiple tasks concurrently:

# Spawn these in parallel (Task is the sub-agent call; each *_prompt
# variable holds the detailed research instructions described above):
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]
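
A sketch of what one such research prompt might contain. The prompt text, paths, and variable name below are hypothetical; they simply exercise the requirements listed above (specific search targets, focused directories, expected output format, file:line references):

db_research_prompt = """
Research the current database schema for user accounts.
- Search only src/db/migrations/ and src/db/models/.
- Extract table names, columns used by authentication, and existing indexes.
- Report each finding as file:line followed by a one-line summary.
"""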

Common Patterns

For Database Changes:

  1. Start with schema/migration
  2. Add store methods
  3. Update business logic
  4. Expose via API
  5. Update clients
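
For illustration, a breakdown following this order might look like the sketch below (the feature and file paths are hypothetical):

1. Add migration for a last_login_at column in db/migrations/
2. Add updateLastLogin() store method in src/store/users.ts
3. Call updateLastLogin() from login() in src/auth.ts
4. Expose the field via GET /api/me in src/api/users.ts
5. Show last login in the profile view in src/ui/Profile.tsx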

For New Features:

  1. Research existing patterns first
  2. Start with data model
  3. Build backend logic
  4. Add API endpoints
  5. Implement UI last

For Refactoring:

  1. Document current behavior
  2. Plan incremental changes
  3. Maintain backwards compatibility
  4. Include migration strategy
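
A refactoring breakdown following this pattern might look like the sketch below (module names are hypothetical):

1. Document the current retry behavior of src/http/client.ts in the plan itself
2. Extract the retry logic into src/http/retry.ts without changing behavior
3. Keep the old entry point as a thin wrapper for backwards compatibility
4. Add a migration note deprecating the wrapper and listing call sites to update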

Notes

  • Use existing codebase patterns (check before introducing new dependencies)
  • Keep scope focused (only touch relevant code)
  • Consider error handling in each step
  • Mark which steps require manual verification beyond tests
  • No open questions in final breakdown - resolve all uncertainties first