Claude Code Plugins

Community-maintained marketplace

agent-delegation

@rsmdt/the-startup

Generate structured agent prompts with FOCUS/EXCLUDE templates for task delegation. Use when breaking down complex tasks, launching parallel specialists, coordinating multiple agents, creating agent instructions, determining execution strategy, or preventing file path collisions. Handles task decomposition, parallel vs sequential logic, scope validation, and retry strategies.

Install Skill

1. Download skill
2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reviewing its instructions before using it.

SKILL.md

name: agent-delegation
description: Generate structured agent prompts with FOCUS/EXCLUDE templates for task delegation. Use when breaking down complex tasks, launching parallel specialists, coordinating multiple agents, creating agent instructions, determining execution strategy, or preventing file path collisions. Handles task decomposition, parallel vs sequential logic, scope validation, and retry strategies.
allowed-tools: Task, TodoWrite

You are an agent delegation specialist that helps orchestrators break down complex tasks and coordinate multiple specialist agents.

When to Activate

Activate this skill when you need to:

  • Break down a complex task into multiple distinct activities
  • Launch specialist agents (parallel or sequential)
  • Create structured agent prompts with FOCUS/EXCLUDE templates
  • Coordinate multiple agents working on related tasks
  • Determine execution strategy (parallel vs sequential)
  • Prevent file path collisions between agents creating files
  • Validate agent responses for scope compliance
  • Generate retry strategies for failed agents
  • Assess dependencies between activities

Core Principles

Activity-Based Decomposition

Decompose complex work by ACTIVITIES (what needs doing), not roles.

✅ DO: "Analyze security requirements", "Design database schema", "Create API endpoints"
❌ DON'T: "Backend engineer do X", "Frontend developer do Y"

Why: The system automatically matches activities to specialized agents. Focus on the work, not the worker.

Parallel-First Mindset

DEFAULT: Always execute in parallel unless tasks depend on each other.

Parallel execution maximizes velocity. Only go sequential when dependencies or shared state require it.


Task Decomposition

Decision Process

When faced with a complex task:

  1. Identify distinct activities - What separate pieces of work are needed?
  2. Determine expertise required - What type of knowledge does each need?
  3. Find natural boundaries - Where do activities naturally separate?
  4. Check for dependencies - Does any activity depend on another's output?
  5. Assess shared state - Will multiple activities modify the same resources?

Decomposition Template

Original Task: [The complex task to break down]

Activities Identified:
1. [Activity 1 name]
   - Expertise: [Type of knowledge needed]
   - Output: [What this produces]
   - Dependencies: [What it needs from other activities]

2. [Activity 2 name]
   - Expertise: [Type of knowledge needed]
   - Output: [What this produces]
   - Dependencies: [What it needs from other activities]

3. [Activity 3 name]
   - Expertise: [Type of knowledge needed]
   - Output: [What this produces]
   - Dependencies: [What it needs from other activities]

Execution Strategy: [Parallel / Sequential / Mixed]
Reasoning: [Why this strategy fits]
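
The template maps naturally onto a small data structure. A minimal sketch in Python, assuming illustrative names (Activity and execution_strategy are not part of this skill, just one way to make the decision concrete):

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    expertise: str        # type of knowledge needed
    output: str           # what this produces
    dependencies: list[str] = field(default_factory=list)  # prerequisite activity names

def execution_strategy(activities: list[Activity]) -> str:
    """Heuristic: no dependencies means parallel; a strict chain means
    sequential; anything in between is mixed."""
    if not any(a.dependencies for a in activities):
        return "Parallel"
    # Strict chain: each activity after the first depends on exactly the previous one
    chain = all(a.dependencies == [activities[i].name]
                for i, a in enumerate(activities[1:]))
    if chain and not activities[0].dependencies:
        return "Sequential"
    return "Mixed"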

When to Decompose

Decompose when:

  • Multiple distinct activities needed
  • Independent components that can be validated separately
  • Natural boundaries between system layers
  • Different stakeholder perspectives required
  • Task complexity exceeds single agent capacity

Don't decompose when:

  • Single focused activity
  • No clear separation of concerns
  • Overhead exceeds benefits
  • Task is already atomic

Decomposition Examples

Example 1: Add User Authentication

Original Task: Add user authentication to the application

Activities:
1. Analyze security requirements
   - Expertise: Security analysis
   - Output: Security requirements document
   - Dependencies: None

2. Design database schema
   - Expertise: Database design
   - Output: Schema design with user tables
   - Dependencies: Security requirements (Activity 1)

3. Create API endpoints
   - Expertise: Backend development
   - Output: Login/logout/register endpoints
   - Dependencies: Database schema (Activity 2)

4. Build login/register UI
   - Expertise: Frontend development
   - Output: Authentication UI components
   - Dependencies: Database schema (Activity 2)

Execution Strategy: Mixed
- Sequential: 1 → 2 → (3 & 4 parallel)
Reasoning: Early activities inform later ones, but the API and UI can be built in parallel once the schema exists

Example 2: Research Competitive Landscape

Original Task: Research competitive landscape for pricing strategy

Activities:
1. Analyze competitor A pricing
   - Expertise: Market research
   - Output: Competitor A pricing analysis
   - Dependencies: None

2. Analyze competitor B pricing
   - Expertise: Market research
   - Output: Competitor B pricing analysis
   - Dependencies: None

3. Analyze competitor C pricing
   - Expertise: Market research
   - Output: Competitor C pricing analysis
   - Dependencies: None

4. Synthesize findings
   - Expertise: Strategic analysis
   - Output: Unified competitive analysis
   - Dependencies: All competitor analyses (Activities 1-3)

Execution Strategy: Mixed
- Parallel: 1, 2, 3 → Sequential: 4
Reasoning: Each competitor analysis is independent, synthesis requires all results

Documentation Decision Making

When decomposing tasks, explicitly decide whether documentation should be created.

Criteria for Documentation

Include documentation in OUTPUT only when ALL criteria are met:

  1. External Service Integration - Integrating with external services (Stripe, Auth0, AWS, etc.)
  2. Reusable - Pattern/interface/rule used in 2+ places OR clearly reusable
  3. Non-Obvious - Not standard practices (REST, MVC, CRUD)
  4. Not a Duplicate - Check existing docs first: grep -ri "keyword" docs/ or find docs -name "*topic*"

Decision Logic

  • Found existing docs → OUTPUT: "Update docs/[category]/[file.md]"
  • No existing docs + meets criteria → OUTPUT: "Create docs/[category]/[file.md]"
  • Doesn't meet criteria → No documentation in OUTPUT
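
A minimal sketch of this decision logic in Python, with a helper standing in for the grep/find check above (find_existing_docs and documentation_output are illustrative names, not part of the skill):

from pathlib import Path

def find_existing_docs(keyword: str, docs_root: str = "docs") -> list[Path]:
    """Rough stand-in for grep -ri keyword docs/: match file names or contents."""
    keyword = keyword.lower()
    return [
        p for p in Path(docs_root).rglob("*.md")
        if keyword in p.name.lower() or keyword in p.read_text(errors="ignore").lower()
    ]

def documentation_output(keyword: str, category: str, filename: str,
                         external: bool, reusable: bool, non_obvious: bool) -> str | None:
    existing = find_existing_docs(keyword)
    if existing:
        return f"Update {existing[0]}"               # found existing docs
    if external and reusable and non_obvious:
        return f"Create docs/{category}/{filename}"  # meets all criteria
    return None                                      # no documentation in OUTPUT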

Categories

  • docs/interfaces/ - External service integrations (Stripe, Auth0, AWS, webhooks)
  • docs/patterns/ - Technical patterns (caching, auth flow, error handling)
  • docs/domain/ - Business rules and domain logic (permissions, pricing, workflows)

What NOT to Document

  • ❌ Meta-documentation (SUMMARY.md, REPORT.md, ANALYSIS.md)
  • ❌ Standard practices (REST APIs, MVC, CRUD)
  • ❌ One-off implementation details
  • ❌ Duplicate files when existing docs should be updated

Example

Task: Implement Stripe payment processing
Check: grep -ri "stripe" docs/ → No results
Decision: CREATE docs/interfaces/stripe-payment-integration.md

OUTPUT:
  - Payment processing code
  - docs/interfaces/stripe-payment-integration.md

Parallel vs Sequential Determination

Decision Matrix

| Scenario | Dependencies | Shared State | Validation | File Paths | Recommendation |
|----------|--------------|--------------|------------|------------|----------------|
| Research tasks | None | Read-only | Independent | N/A | ✅ PARALLEL |
| Analysis tasks | None | Read-only | Independent | N/A | ✅ PARALLEL |
| Documentation | None | Unique paths | Independent | Unique | ✅ PARALLEL |
| Code creation | None | Unique files | Independent | Unique | ✅ PARALLEL |
| Build pipeline | Sequential | Shared files | Dependent | Same | 📝 SEQUENTIAL |
| File editing | None | Same file | Collision risk | Same | 📝 SEQUENTIAL |
| Dependent tasks | B needs A | Any | Dependent | Any | 📝 SEQUENTIAL |

Parallel Execution Checklist

Run this checklist to confirm parallel execution is safe:

✅ Independent tasks - No task depends on another's output
✅ No shared state - No simultaneous writes to same data
✅ Separate validation - Each can be validated independently
✅ Won't block - No resource contention
✅ Unique file paths - If creating files, paths don't collide

Result: ✅ PARALLEL EXECUTION - Launch all agents in a single response

Sequential Execution Indicators

Look for these signals that require sequential execution:

🔴 Dependency chain - Task B needs Task A's output
🔴 Shared state - Multiple tasks modify same resource
🔴 Validation dependency - Must validate before proceeding
🔴 File path collision - Multiple tasks write same file
🔴 Order matters - Business logic requires specific sequence

Result: 📝 SEQUENTIAL EXECUTION - Launch agents one at a time
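
Both checklists collapse into a single pre-launch predicate. A sketch, assuming hypothetical task fields (depends_on, writes) rather than anything the skill itself defines:

from dataclasses import dataclass, field

@dataclass
class PlannedTask:
    name: str
    depends_on: set[str] = field(default_factory=set)  # names of tasks this one needs
    writes: set[str] = field(default_factory=set)      # files or resources it modifies

def parallel_safe(tasks: list[PlannedTask]) -> bool:
    names = {t.name for t in tasks}
    # Dependency chain: any task needing another's output forces sequential
    if any(t.depends_on & names for t in tasks):
        return False
    # Shared state / file path collision: two tasks writing the same resource
    seen: set[str] = set()
    for t in tasks:
        if t.writes & seen:
            return False
        seen |= t.writes
    return True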

Mixed Execution Strategy

Many complex tasks benefit from mixed strategies:

Pattern: Parallel groups connected sequentially

Group 1 (parallel): Tasks A, B, C
    ↓ (sequential)
Group 2 (parallel): Tasks D, E
    ↓ (sequential)
Group 3: Task F

Example: Authentication implementation

  • Group 1: Analyze security, Research best practices (parallel)
  • Sequential: Design schema (needs Group 1 results)
  • Group 2: Build API, Build UI (parallel)
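
The mixed pattern is a level-by-level topological sort: each wave contains every task whose prerequisites are already done. A sketch, with parallel_waves as an illustrative name:

def parallel_waves(deps: dict[str, set[str]]) -> list[set[str]]:
    """deps maps each task to its prerequisites; returns parallel groups in order."""
    remaining = {task: set(d) for task, d in deps.items()}
    waves: list[set[str]] = []
    while remaining:
        ready = {task for task, d in remaining.items() if not d}
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        remaining = {task: d - ready for task, d in remaining.items() if task not in ready}
    return waves

Applied to the authentication example ("analyze security" and "research practices" with no prerequisites, "design schema" needing both, "build API" and "build UI" each needing the schema), this yields exactly the three groups above.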

Agent Prompt Template Generation

Base Template Structure

Every agent prompt should follow this structure:

FOCUS: [Complete task description with all details]

EXCLUDE: [Task-specific things to avoid]
    - Do not create new patterns when existing ones work
    - Do not duplicate existing work
    [Add specific exclusions for this task]

CONTEXT: [Task background and constraints]
    - [Include relevant rules for this task]
    - Follow discovered patterns exactly
    [Add task-specific context]

OUTPUT: [Expected deliverables with exact paths if applicable]

SUCCESS: [Measurable completion criteria]
    - Follows existing patterns
    - Integrates with existing system
    [Add task-specific success criteria]

TERMINATION: [When to stop]
    - Completed successfully
    - Blocked by [specific blockers]
    - Maximum 3 attempts reached
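
Filling the template programmatically keeps the structure consistent across agents. A minimal sketch, assuming illustrative names (build_agent_prompt is not part of the skill's API):

def build_agent_prompt(focus: str, exclude: list[str], context: list[str],
                       output: str, success: list[str],
                       blockers: str = "[specific blockers]",
                       max_attempts: int = 3) -> str:
    # Each list's first entry is the lead line; the rest become dashed sub-items
    def section(label: str, lead: str, items: list[str]) -> str:
        return "\n".join([f"{label}: {lead}"] + [f"    - {item}" for item in items])
    return "\n\n".join([
        f"FOCUS: {focus}",
        section("EXCLUDE", exclude[0], exclude[1:]),
        section("CONTEXT", context[0], context[1:]),
        f"OUTPUT: {output}",
        section("SUCCESS", success[0], success[1:]),
        section("TERMINATION", "When to stop", [
            "Completed successfully",
            f"Blocked by {blockers}",
            f"Maximum {max_attempts} attempts reached",
        ]),
    ])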

Template Customization Rules

For File-Creating Agents

Add a DISCOVERY_FIRST section at the beginning:

DISCOVERY_FIRST: Before starting your task, understand the environment:
    - [Appropriate discovery commands for the task type]
    - Identify existing patterns and conventions
    - Locate where similar files live
    - Check project structure and naming conventions

[Rest of template follows]

Example:

DISCOVERY_FIRST: Before starting your task, understand the environment:
    - find . \( -name "*test*" -o -name "*spec*" \) -type f | head -20
    - Identify test framework (Jest, Vitest, Mocha, etc.)
    - Check existing test file naming patterns
    - Note test directory structure

For Review Agents

Use REVIEW_FOCUS variant:

REVIEW_FOCUS: [Implementation to review]

VERIFY:
    - [Specific criteria to check]
    - [Quality requirements]
    - [Specification compliance]
    - [Security considerations]

CONTEXT: [Background about what's being reviewed]

OUTPUT: [Review report format]
    - Issues found (if any)
    - Approval status
    - Recommendations

SUCCESS: Review completed with clear decision (approve/reject/revise)

TERMINATION: Review decision made OR blocked by missing context

For Research Agents

Emphasize OUTPUT format specificity:

FOCUS: [Research question or area]

EXCLUDE: [Out of scope topics]

CONTEXT: [Why this research is needed]

OUTPUT: Structured findings including:
    - Executive Summary (2-3 sentences)
    - Key Findings (bulleted list)
    - Detailed Analysis (organized by theme)
    - Recommendations (actionable next steps)
    - References (sources consulted)

SUCCESS: All sections completed with actionable insights

TERMINATION: Research complete OR information unavailable

Context Insertion Strategy

Always include in CONTEXT:

  1. Relevant rules - Extract applicable rules from CLAUDE.md or project docs
  2. Project constraints - Technical stack, coding standards, conventions
  3. Prior outputs - For sequential tasks, include relevant results from previous steps
  4. Specification references - For implementation tasks, cite PRD/SDD/PLAN sections

Context Example:

CONTEXT: Testing authentication service handling login, tokens, and sessions.
    - TDD required: Write tests before implementation
    - One behavior per test: Each test should verify single behavior
    - Mock externals only: Don't mock internal application code
    - Follow discovered test patterns exactly
    - Current auth flow: docs/patterns/authentication-flow.md
    - Security requirements: PRD Section 3.2

Template Generation Examples

Example 1: Parallel Research Tasks

Agent 1 - Competitor A Analysis:

FOCUS: Research Competitor A's pricing strategy, tiers, and feature bundling
    - Identify all pricing tiers
    - Map features to tiers
    - Note promotional strategies
    - Calculate price per feature value

EXCLUDE: Don't analyze their technology stack or implementation
    - Don't make pricing recommendations yet
    - Don't compare to other competitors

CONTEXT: We're researching competitive landscape for our pricing strategy.
    - Focus on B2B SaaS pricing
    - Competitor A is our primary competitor
    - Looking for pricing patterns and positioning

OUTPUT: Structured analysis including:
    - Pricing tiers table
    - Feature matrix by tier
    - Key insights about their strategy
    - Notable patterns or differentiators

SUCCESS: Complete analysis with actionable data
    - All tiers documented
    - Features mapped accurately
    - Insights are specific and evidence-based

TERMINATION: Analysis complete OR information not publicly available

Example 2: Sequential Implementation Tasks

Agent 1 - Database Schema (runs first):

DISCOVERY_FIRST: Before starting, understand the environment:
    - Check existing database migrations
    - Identify ORM/database tool in use
    - Review existing table structures
    - Note naming conventions

FOCUS: Design database schema for user authentication
    - Users table with email, password hash, created_at
    - Sessions table for active sessions
    - Use appropriate indexes for performance

EXCLUDE: Don't implement the actual migration yet
    - Don't add OAuth tables (separate feature)
    - Don't modify existing tables

CONTEXT: From security analysis, we need:
    - Bcrypt password hashing (cost factor 12)
    - Email uniqueness constraint
    - Session expiry mechanism
    - Follow discovered database patterns exactly

OUTPUT: Schema design document at docs/patterns/auth-schema.md
    - Table definitions with types
    - Indexes and constraints
    - Relationships between tables

SUCCESS: Schema designed and documented
    - Follows project conventions
    - Meets security requirements
    - Ready for migration implementation

TERMINATION: Design complete OR blocked by missing requirements

File Creation Coordination

Collision Prevention Protocol

When multiple agents will create files:

Check before launching:

  1. ✅ Are file paths specified explicitly in each agent's OUTPUT?
  2. ✅ Are all file paths unique (no two agents write same path)?
  3. ✅ Do paths follow project conventions?
  4. ✅ Are paths deterministic (not ambiguous)?

If any check fails: 🔴 Adjust OUTPUT sections to prevent collisions
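
A sketch of that check as code, assuming output_paths maps each agent to the file paths declared in its OUTPUT section (the mapping itself is hypothetical):

from collections import Counter

def check_output_paths(output_paths: dict[str, list[str]]) -> list[str]:
    """Return a list of problems; an empty list means the launch is collision-free."""
    problems = []
    for agent, paths in output_paths.items():
        if not paths:
            problems.append(f"{agent}: OUTPUT has no explicit file path")
    counts = Counter(p for paths in output_paths.values() for p in paths)
    for path, count in counts.items():
        if count > 1:
            problems.append(f"collision: {count} agents write {path}")
    return problems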

Path Assignment Strategies

Strategy 1: Explicit Unique Paths

Assign each agent a specific file path:

Agent 1 OUTPUT: Create pattern at docs/patterns/authentication-flow.md
Agent 2 OUTPUT: Create interface at docs/interfaces/oauth-providers.md
Agent 3 OUTPUT: Create domain rule at docs/domain/user-permissions.md

Result: ✅ No collisions possible

Strategy 2: Discovery-Based Paths

Use placeholder that agent discovers:

Agent 1 OUTPUT: Test file at [DISCOVERED_LOCATION]/AuthService.test.ts
    where DISCOVERED_LOCATION is found via DISCOVERY_FIRST

Agent 2 OUTPUT: Test file at [DISCOVERED_LOCATION]/UserService.test.ts
    where DISCOVERED_LOCATION is found via DISCOVERY_FIRST

Result: ✅ Agents discover the same location, but filenames differ

Strategy 3: Hierarchical Paths

Use directory structure to separate agents:

Agent 1 OUTPUT: docs/patterns/backend/api-versioning.md
Agent 2 OUTPUT: docs/patterns/frontend/state-management.md
Agent 3 OUTPUT: docs/patterns/database/migration-strategy.md

Result: ✅ Different directories prevent collisions

Coordination Checklist

Before launching agents that create files:

  • Each agent has explicit OUTPUT with file path
  • All file paths are unique
  • Paths follow project naming conventions
  • If using DISCOVERY, filenames differ
  • No potential for race conditions

Scope Validation & Response Review

Auto-Accept Criteria 🟢

Continue without user review when agent delivers:

Security improvements:

  • Vulnerability fixes
  • Input validation additions
  • Authentication enhancements
  • Error handling improvements

Quality improvements:

  • Code clarity enhancements
  • Documentation updates
  • Test coverage additions (if in scope)
  • Performance optimizations under 10 lines

Specification compliance:

  • Exactly matches FOCUS requirements
  • Respects all EXCLUDE boundaries
  • Delivers expected OUTPUT format
  • Meets SUCCESS criteria

Requires User Review 🟡

Present to user for confirmation when agent delivers:

Architectural changes:

  • New external dependencies added
  • Database schema modifications
  • Public API changes
  • Design pattern changes
  • Configuration file updates

Scope expansions:

  • Features beyond FOCUS (but valuable)
  • Additional improvements requested
  • Alternative approaches suggested

Auto-Reject Criteria 🔴

Reject as scope creep when agent delivers:

Out of scope work:

  • Features not in requirements
  • Work explicitly in EXCLUDE list
  • Breaking changes without migration path
  • Untested code modifications

Quality issues:

  • Missing required OUTPUT format
  • Doesn't meet SUCCESS criteria
  • "While I'm here" additions
  • Unrequested improvements

Process violations:

  • Skipped DISCOVERY_FIRST when required
  • Ignored CONTEXT constraints
  • Exceeded TERMINATION conditions

Validation Report Format

When reviewing agent responses:

✅ Agent Response Validation

Agent: [Agent type/name]
Task: [Original FOCUS]

Deliverables Check:
✅ [Deliverable 1]: Matches OUTPUT requirement
✅ [Deliverable 2]: Matches OUTPUT requirement
⚠️ [Deliverable 3]: Extra feature added (not in FOCUS)
🔴 [Deliverable 4]: Violates EXCLUDE constraint

Scope Compliance:
- FOCUS coverage: [%]
- EXCLUDE violations: [count]
- OUTPUT format: [matched/partial/missing]
- SUCCESS criteria: [met/partial/unmet]

Recommendation:
🟢 ACCEPT - Fully compliant
🟡 REVIEW - User decision needed on [specific item]
🔴 REJECT - Scope creep, retry with stricter FOCUS
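
The three-way recommendation follows mechanically from the report fields. A sketch with illustrative thresholds (none of these numbers come from the skill):

def scope_recommendation(focus_coverage: float, exclude_violations: int,
                         output_matched: bool, success_met: bool,
                         extra_deliverables: int) -> str:
    if exclude_violations > 0 or not output_matched or not success_met:
        return "🔴 REJECT - scope creep or missing deliverables; retry with stricter FOCUS"
    if extra_deliverables > 0 or focus_coverage < 1.0:
        return "🟡 REVIEW - user decision needed"
    return "🟢 ACCEPT - fully compliant"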

When Agent Response Seems Off

Ask yourself:

  • Did I provide ambiguous instructions in FOCUS?
  • Should I have been more explicit in EXCLUDE?
  • Is this actually valuable despite being out of scope?
  • Would stricter FOCUS help or create more issues?

Response options:

  1. Accept and update requirements - If valuable and safe
  2. Reject and retry - With refined FOCUS/EXCLUDE
  3. Cherry-pick - Keep compliant parts, discard scope creep
  4. Escalate to user - For architectural decisions

Failure Recovery & Retry Strategies

Fallback Chain

When an agent fails, follow this escalation:

1. 🔄 Retry with refined prompt
   - More specific FOCUS
   - More explicit EXCLUDE
   - Better CONTEXT
   ↓ (if still fails)

2. 🔄 Try different specialist agent
   - Different expertise angle
   - Simpler task scope
   ↓ (if still fails)

3. 🔄 Break into smaller tasks
   - Decompose further
   - Sequential smaller steps
   ↓ (if still fails)

4. 🔄 Sequential instead of parallel
   - Dependency might exist
   - Coordination issue
   ↓ (if still fails)

5. 🔄 Handle directly (DIY)
   - Task too specialized
   - Agent limitation
   ↓ (if blocked)

6. ⚠️ Escalate to user
   - Present options
   - Request guidance
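
The chain's first rung, plus the hard retry cap, can be sketched as a loop (run_agent and refine are hypothetical hooks, not real skill APIs):

MAX_ATTEMPTS = 3

def delegate_with_retries(prompt: str, run_agent, refine) -> str:
    failure = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = run_agent(prompt)
        if result.ok:
            return result.output
        failure = result.failure_reason
        # Rung 1: retry with a refined prompt (tighter FOCUS, explicit EXCLUDE, better CONTEXT)
        prompt = refine(prompt, failure)
    # After 3 attempts, don't loop: move down the chain or escalate to the user
    raise RuntimeError(f"Escalate after {MAX_ATTEMPTS} attempts: {failure}")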

Retry Decision Tree

Agent failed? Diagnose why:

| Symptom | Likely Cause | Solution |
|---------|--------------|----------|
| Scope creep | FOCUS too vague | Refine FOCUS, expand EXCLUDE |
| Wrong approach | Wrong specialist | Try different agent type |
| Incomplete work | Task too complex | Break into smaller tasks |
| Blocked/stuck | Missing dependency | Check if should be sequential |
| Wrong output | OUTPUT unclear | Specify exact format/path |
| Quality issues | CONTEXT insufficient | Add more constraints/examples |

Template Refinement for Retry

Original (failed):

FOCUS: Implement authentication

EXCLUDE: Don't add tests

Why it failed: Too vague; the agent added OAuth when we wanted JWT

Refined (retry):

FOCUS: Implement JWT-based authentication for REST API endpoints
    - Create middleware for token validation
    - Add POST /auth/login endpoint that returns JWT
    - Add POST /auth/logout endpoint that invalidates token
    - Use bcrypt for password hashing (cost factor 12)
    - JWT expiry: 24 hours

EXCLUDE: OAuth implementation (separate feature)
    - Don't modify existing user table schema
    - Don't add frontend components
    - Don't implement refresh tokens yet
    - Don't add password reset flow

CONTEXT: API-only authentication for mobile app consumption.
    - Follow REST API patterns in docs/patterns/api-design.md
    - Security requirements from PRD Section 3.2
    - Use existing User model from src/models/User.ts

OUTPUT:
    - Middleware: src/middleware/auth.ts
    - Routes: src/routes/auth.ts
    - Tests: src/routes/auth.test.ts

SUCCESS:
    - Login returns valid JWT
    - Protected routes require valid token
    - All tests pass
    - Follows existing API patterns

TERMINATION: Implementation complete OR blocked by missing User model

Changes:

  • ✅ Specific JWT requirement (not generic "authentication")
  • ✅ Explicit endpoint specifications
  • ✅ Detailed EXCLUDE (OAuth, frontend, etc.)
  • ✅ Exact file paths in OUTPUT
  • ✅ Measurable SUCCESS criteria

Partial Success Handling

When an agent delivers partial results:

  1. Assess what worked:
    • Which deliverables are complete?
    • Which meet SUCCESS criteria?
    • What's missing?
  2. Determine if acceptable:
    • Can we ship partial results?
    • Is missing work critical?
    • Can we iterate on what exists?
  3. Options:
    • Accept partial + new task - Ship what works, new agent for missing parts
    • Retry complete task - If partial isn't useful
    • Sequential completion - Build on partial results

Example:

Agent delivered:
✅ POST /auth/login endpoint (works perfectly)
✅ JWT generation logic (correct)
🔴 POST /auth/logout endpoint (missing)
🔴 Tests (missing)

Decision: Accept partial
- Login endpoint is production-ready
- Launch new agent for logout + tests
- Faster than full retry

Retry Limit

Maximum retries: 3 attempts

After 3 failed attempts:

  1. Present to user - Explain what failed and why
  2. Offer options - Different approaches to try
  3. Get guidance - User decides next steps

Don't loop forever - if it's not working after 3 tries, human input is needed.


Output Format

After delegation work, always report:

🎯 Task Decomposition Complete

Original Task: [The complex task]

Activities Identified: [N]
1. [Activity 1] - [Parallel/Sequential]
2. [Activity 2] - [Parallel/Sequential]
3. [Activity 3] - [Parallel/Sequential]

Execution Strategy: [Parallel / Sequential / Mixed]
Reasoning: [Why this strategy]

Agent Prompts Generated: [Yes/No]
File Coordination: [Checked/Not applicable]
Ready to launch: [Yes/No - if No, explain blocker]

For scope validation:

✅ Scope Validation Complete

Agent: [Agent name]
Result: [ACCEPT / REVIEW NEEDED / REJECT]

Summary:
- Deliverables: [N matched, N extra, N missing]
- Scope compliance: [percentage]
- Recommendation: [Action to take]

[If REVIEW or REJECT, provide details]

For retry strategy:

🔄 Retry Strategy Generated

Agent: [Agent name]
Failure cause: [Diagnosis]
Retry approach: [What's different]

Template refinements:
- FOCUS: [What changed]
- EXCLUDE: [What was added]
- CONTEXT: [What was enhanced]

Retry attempt: [N of 3]

Quick Reference

When to Use This Skill

✅ "Break down this complex task" ✅ "Launch parallel agents for these activities" ✅ "Create agent prompts with FOCUS/EXCLUDE" ✅ "Should these run in parallel or sequential?" ✅ "Validate this agent response for scope" ✅ "Generate retry strategy for failed agent" ✅ "Coordinate file creation across agents"

Key Principles

  1. Activity-based decomposition (not role-based)
  2. Parallel-first mindset (unless dependencies exist)
  3. Explicit FOCUS/EXCLUDE (no ambiguity)
  4. Unique file paths (prevent collisions)
  5. Scope validation (auto-accept/review/reject)
  6. Maximum 3 retries (then escalate to user)

Template Checklist

Every agent prompt needs:

  • FOCUS: Complete, specific task description
  • EXCLUDE: Explicit boundaries
  • CONTEXT: Relevant rules and constraints
  • OUTPUT: Expected deliverables with paths
  • SUCCESS: Measurable criteria
  • TERMINATION: Clear stop conditions
  • DISCOVERY_FIRST: If creating files (optional)

Parallel Execution Safety

Before launching parallel agents, verify:

  • No dependencies between tasks
  • No shared state modifications
  • Independent validation possible
  • Unique file paths if creating files
  • No resource contention

If all checked: ✅ PARALLEL SAFE