coordinating-specialized-agents

@chriscarterux/chris-claude-stack

This skill should be used when working with multiple Claude Code agents to accomplish complex tasks. It teaches systematic agent selection, orchestration patterns (parallel vs. sequential), context preparation, and quality review of agent outputs for optimal multi-agent workflows.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions to verify it before use.

SKILL.md

name: coordinating-specialized-agents
description: This skill should be used when working with multiple Claude Code agents to accomplish complex tasks. It teaches systematic agent selection, orchestration patterns (parallel vs. sequential), context preparation, and quality review of agent outputs for optimal multi-agent workflows.

Coordinating Specialized Agents

Overview

Master the art of orchestrating Claude Code's 70+ specialized agents to accomplish complex tasks efficiently. This skill teaches systematic agent selection, coordination patterns, and quality control for multi-agent workflows.

Core principle: The right agent with the right context produces 10x better results than ad-hoc requests.

When to Use

Use this skill when:

  • Complex tasks require multiple specialized capabilities
  • Deciding which agent(s) to use for a task
  • Chaining agents together for workflows
  • Reviewing and integrating outputs from multiple agents
  • Optimizing multi-agent collaboration
  • Building repeatable agent pipelines

This skill transforms:

  • Confused agent selection → Systematic decision framework
  • Sequential one-by-one → Efficient parallel execution
  • Disconnected outputs → Integrated solutions
  • Trial and error → Proven orchestration patterns

Agent Catalog: Know Your Team

Engineering Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| frontend-developer | React/Vue/Angular UI, state management, performance | Backend logic, databases |
| backend-architect | APIs, databases, server logic, scalability | UI components, styling |
| mobile-app-builder | iOS/Android apps, React Native, native features | Web-only applications |
| ai-engineer | LLM integration, ML features, embeddings, prompts | Basic CRUD, simple logic |
| devops-automator | CI/CD, infrastructure, deployment automation | Application code |
| rapid-prototyper | MVPs, POCs, fast iterations | Production-grade architecture |

Testing & Quality Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| test-writer-fixer | Writing tests, fixing failures, maintaining test suite | Running manual tests |
| performance-benchmarker | Speed testing, profiling, optimization recommendations | Functional testing |
| api-tester | API testing, load testing, contract testing | UI testing |
| test-results-analyzer | Analyzing failures, finding patterns, quality metrics | Writing new tests |

Product & Strategy Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| trend-researcher | Market trends, viral content, opportunities | Building features |
| feedback-synthesizer | User feedback analysis, pattern identification | Direct user interviews |
| sprint-prioritizer | Feature prioritization, roadmap planning, tradeoffs | Implementation |
| app-store-optimizer | ASO, keywords, screenshots, review management | App development |
| tiktok-strategist | TikTok marketing, viral content, creator partnerships | Other social platforms |

Design Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| ui-designer | Interface design, component systems, visual aesthetics | Implementation |
| ux-researcher | User research, journey maps, behavior analysis | Design execution |
| brand-guardian | Brand consistency, guidelines, visual identity | Ad-hoc design |
| visual-storyteller | Infographics, presentations, visual narratives | Code/implementation |
| accessibility-specialist | WCAG compliance, screen readers, inclusive design | General UX |
| whimsy-injector | Delightful moments, personality, memorable UX | Serious/formal interfaces |

Operations Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| studio-producer | Cross-team coordination, resource allocation, workflow optimization | Individual tasks |
| project-shipper | Launch coordination, release management, go-to-market | Development work |
| experiment-tracker | A/B tests, feature experiments, data analysis | Implementation |
| workflow-optimizer | Process improvement, bottleneck identification | One-off tasks |
| infrastructure-maintainer | System health, performance, scaling, reliability | Application features |

Business & Support Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| finance-tracker | Budgets, costs, revenue forecasting, financial analysis | Accounting |
| analytics-reporter | Metrics, insights, performance reports, dashboards | Data collection setup |
| support-responder | Customer support, documentation, automated responses | Sales |
| legal-compliance-checker | Terms of service, privacy policies, regulatory compliance | Legal advice |

Meta Agents

| Agent | Best For | Don't Use For |
| --- | --- | --- |
| studio-coach | Agent coordination, motivation, performance coaching | Direct implementation |
| tool-evaluator | Tool assessment, comparison, recommendations | Using tools |
| Explore | Codebase exploration, finding files, understanding structure | Making changes |
| joker | Humor, morale, fun error messages | Serious documentation |

Agent Selection Framework

Decision Tree

What's the primary goal?

├─ Build something new
│  ├─ MVP/prototype → rapid-prototyper
│  ├─ Frontend UI → ui-designer then frontend-developer
│  ├─ Backend API → backend-architect
│  ├─ Mobile app → mobile-app-builder
│  └─ AI feature → ai-engineer
│
├─ Test/verify something
│  ├─ Write tests → test-writer-fixer
│  ├─ Performance → performance-benchmarker
│  ├─ API testing → api-tester
│  └─ Analyze results → test-results-analyzer
│
├─ Understand something
│  ├─ Codebase → Explore
│  ├─ User needs → ux-researcher
│  ├─ Market trends → trend-researcher
│  └─ User feedback → feedback-synthesizer
│
├─ Optimize something
│  ├─ Workflow → workflow-optimizer
│  ├─ Infrastructure → infrastructure-maintainer
│  ├─ Conversion → app-store-optimizer (apps) or tiktok-strategist (content)
│  └─ Costs → finance-tracker
│
├─ Ship/launch something
│  ├─ Coordinate launch → project-shipper
│  ├─ Set up deployment → devops-automator
│  └─ Track experiment → experiment-tracker
│
└─ Coordinate agents
   └─ studio-coach

Complexity-Based Selection

Simple Task (1 agent):

  • "Add a button to the homepage" → frontend-developer
  • "Write tests for user login" → test-writer-fixer
  • "Analyze app store keywords" → app-store-optimizer

Medium Task (2-3 agents):

  • "Build a new feature"
    1. ui-designer → design interface
    2. frontend-developer → implement
    3. test-writer-fixer → add tests

Complex Task (4+ agents):

  • "Launch a new product"
    1. trend-researcher → market validation
    2. ui-designer → interface design
    3. rapid-prototyper → MVP
    4. test-writer-fixer → quality assurance
    5. devops-automator → deployment setup
    6. project-shipper → launch coordination

Orchestration Patterns

Pattern 1: Sequential Pipeline

When: Each step depends on the previous one

Example: Feature Development

1. ux-researcher → identify user needs
2. ui-designer → create designs
3. frontend-developer → implement UI
4. backend-architect → build API
5. test-writer-fixer → comprehensive tests
6. devops-automator → deployment

How to execute:

Step 1: @ux-researcher research user login pain points
[Wait for output, review findings]

Step 2: @ui-designer design login flow based on: [ux findings]
[Wait for output, review designs]

Step 3: @frontend-developer implement this login UI: [design specs]
[Continue sequential pattern...]
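When you drive agents from a script rather than the chat window, the sequential pattern maps onto a simple async chain. The sketch below is illustrative TypeScript: `runAgent` is a hypothetical placeholder for however you actually invoke an agent, not a real Claude Code API, and the prompts are abbreviated.

```typescript
// Hypothetical helper: stands in for however you actually invoke an agent
// (chat, CLI, or otherwise). It is NOT a real Claude Code API.
async function runAgent(agent: string, prompt: string): Promise<string> {
  return `[output of @${agent} for: ${prompt}]`; // placeholder
}

// Sequential pipeline: each step consumes the previous step's output.
async function featurePipeline(feature: string): Promise<string> {
  const research = await runAgent("ux-researcher", `Research user pain points for ${feature}`);
  const design = await runAgent("ui-designer", `Design ${feature} based on:\n${research}`);
  const ui = await runAgent("frontend-developer", `Implement this design:\n${design}`);
  return runAgent("test-writer-fixer", `Write comprehensive tests for:\n${ui}`);
}

featurePipeline("the login flow").then(console.log);
```

The structure mirrors the prompt steps above: review each intermediate output before letting it feed the next stage.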

Pattern 2: Parallel Execution

When: Tasks are independent and can happen simultaneously

Example: Preparing for Launch

Parallel tracks:
├─ Track A: @app-store-optimizer optimize listing
├─ Track B: @tiktok-strategist create content plan
├─ Track C: @support-responder set up help docs
└─ Track D: @legal-compliance-checker review policies

How to execute:

Launch all agents simultaneously (mention all in one message):

@app-store-optimizer create app store listing for [product]
@tiktok-strategist develop TikTok strategy for [product]
@support-responder create support documentation for [product]
@legal-compliance-checker review terms and privacy policy

[All agents work in parallel, review outputs together]
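Programmatically, parallel execution is just launching all the independent calls at once and awaiting them together. A minimal sketch, again assuming a hypothetical `runAgent` placeholder rather than a real API:

```typescript
// Hypothetical helper standing in for a real agent invocation.
async function runAgent(agent: string, prompt: string): Promise<string> {
  return `[output of @${agent}]`; // placeholder
}

// Parallel execution: independent tracks launched together, reviewed together.
async function launchPrep(product: string) {
  const [listing, content, docs, legal] = await Promise.all([
    runAgent("app-store-optimizer", `Create app store listing for ${product}`),
    runAgent("tiktok-strategist", `Develop TikTok strategy for ${product}`),
    runAgent("support-responder", `Create support documentation for ${product}`),
    runAgent("legal-compliance-checker", `Review terms and privacy policy for ${product}`),
  ]);
  return { listing, content, docs, legal }; // review all four outputs together
}

launchPrep("the new app").then(console.log);
```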

Pattern 3: Iterative Refinement

When: Need progressive improvement through multiple passes

Example: Design Iteration

Round 1:
ui-designer → initial designs
↓
ux-researcher → user testing feedback
↓
ui-designer → refined designs
↓
accessibility-specialist → accessibility review
↓
ui-designer → final accessible designs

How to execute:

@ui-designer create dashboard design for [use case]
[Review output]

@ux-researcher analyze this design for usability issues: [design]
[Review feedback]

@ui-designer refine dashboard based on: [ux feedback]
[Review refined design]

@accessibility-specialist audit for WCAG compliance: [refined design]
[Review accessibility recommendations]

@ui-designer final pass incorporating: [accessibility fixes]
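The same loop can be expressed in code if you script your workflows. A minimal sketch with a bounded number of refinement rounds, ending with the accessibility pass; `runAgent` is a hypothetical placeholder:

```typescript
// Hypothetical helper standing in for a real agent invocation.
async function runAgent(agent: string, prompt: string): Promise<string> {
  return `[output of @${agent}]`; // placeholder
}

// Iterative refinement: alternate design and review for a bounded number of rounds,
// then finish with an accessibility audit and a final design pass.
async function refineDesign(brief: string, maxRounds = 2): Promise<string> {
  let design = await runAgent("ui-designer", `Create dashboard design for: ${brief}`);
  for (let round = 0; round < maxRounds; round++) {
    const feedback = await runAgent("ux-researcher", `Analyze this design for usability issues:\n${design}`);
    design = await runAgent("ui-designer", `Refine the design based on:\n${feedback}`);
  }
  const audit = await runAgent("accessibility-specialist", `Audit for WCAG compliance:\n${design}`);
  return runAgent("ui-designer", `Final pass incorporating:\n${audit}`);
}

refineDesign("sales analytics for small teams").then(console.log);
```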

Pattern 4: Specialist Review Chain

When: Multiple perspectives needed for quality

Example: Code Review Process

frontend-developer → implementation
↓
performance-benchmarker → performance review
↓
accessibility-specialist → accessibility review
↓
test-writer-fixer → test coverage review
↓
[Integrate all feedback]
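Scripted, the key difference from a sequential pipeline is that every reviewer examines the same artifact and feedback accumulates rather than replacing it. A rough sketch (hypothetical `runAgent` placeholder); since the reviews are independent, they could also run in parallel:

```typescript
// Hypothetical helper standing in for a real agent invocation.
async function runAgent(agent: string, prompt: string): Promise<string> {
  return `[output of @${agent}]`; // placeholder
}

// Review chain: every reviewer sees the same implementation; feedback accumulates.
async function reviewChain(implementation: string): Promise<string[]> {
  const reviewers = ["performance-benchmarker", "accessibility-specialist", "test-writer-fixer"];
  const feedback: string[] = [];
  for (const reviewer of reviewers) {
    feedback.push(await runAgent(reviewer, `Review this implementation:\n${implementation}`));
  }
  return feedback; // integrate all feedback before revising once
}
```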

Pattern 5: Fan-Out/Fan-In

When: Multiple parallel analyses converge into decision

Example: Technology Selection

Fan-out (parallel):
├─ tool-evaluator → evaluate framework options
├─ finance-tracker → cost analysis
└─ devops-automator → deployment feasibility

Fan-in:
└─ Synthesize all inputs → make decision

How to execute:

// Fan-out (one message, multiple agents)
@tool-evaluator compare Next.js vs Remix vs Astro for [use case]
@finance-tracker analyze hosting costs for each framework
@devops-automator assess deployment complexity for each

[Wait for all three responses]

// Fan-in (synthesize yourself or use studio-coach)
Based on outputs: [tool evaluation] [cost analysis] [deployment complexity]
Decision: [choose framework with justification]
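As code, fan-out/fan-in is a parallel batch followed by a single synthesis step. A sketch under the same hypothetical `runAgent` assumption, with studio-coach doing the fan-in:

```typescript
// Hypothetical helper standing in for a real agent invocation.
async function runAgent(agent: string, prompt: string): Promise<string> {
  return `[output of @${agent}]`; // placeholder
}

// Fan-out: independent analyses in parallel. Fan-in: one synthesis step.
async function chooseFramework(useCase: string): Promise<string> {
  const [evaluation, costs, deployment] = await Promise.all([
    runAgent("tool-evaluator", `Compare Next.js vs Remix vs Astro for ${useCase}`),
    runAgent("finance-tracker", "Analyze hosting costs for each framework"),
    runAgent("devops-automator", "Assess deployment complexity for each framework"),
  ]);
  // Fan-in: synthesize yourself, or hand the combined inputs to studio-coach.
  return runAgent(
    "studio-coach",
    `Recommend one framework with justification, based on:\n${evaluation}\n${costs}\n${deployment}`
  );
}
```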

Pattern 6: Specialist Consultation

When: Need expert input mid-workflow

Example: Adding AI Feature

frontend-developer → building feature
[Realizes AI integration needed]
↓
@ai-engineer how should I integrate Claude API for [use case]?
[Get recommendations]
↓
Continue with frontend-developer using AI guidance

Context Preparation: Setting Agents Up for Success

Essential Context Elements

1. Clear Objective

❌ Bad: "@frontend-developer build the thing"
✅ Good: "@frontend-developer create a responsive dashboard showing user analytics with 3 key metrics"

2. Relevant Files/Code

❌ Bad: "The code is in the repo somewhere"
✅ Good: "@frontend-developer update src/components/Dashboard.tsx to add real-time updates"

3. Constraints & Requirements

❌ Bad: "Make it work"
✅ Good: "@frontend-developer implement using shadcn/ui components, must work on mobile, keep bundle size under 100KB"

4. Background/Context

❌ Bad: "Add this feature"
✅ Good: "@frontend-developer users complained about slow dashboard loads (3s+). Add virtualization for the 1000-row table using react-virtual"

Context Preparation Checklist

Before calling an agent:

  • What exactly do I need? (clear objective)
  • What files/code are relevant? (@mention them)
  • What constraints exist? (tech stack, performance, design system)
  • What background helps? (why this is needed, user impact)
  • What are the success criteria? (how you'll know it's done)
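If you find yourself repeating this checklist, it can be encoded as a small prompt builder so no element gets skipped. The sketch below is illustrative TypeScript; the field names are this sketch's own shorthand, not a Claude Code schema:

```typescript
// Field names are illustrative, not a Claude Code schema.
interface AgentBrief {
  agent: string;
  objective: string;        // what exactly is needed
  files?: string[];         // relevant files to @mention
  constraints?: string[];   // tech stack, performance budget, design system
  background?: string;      // why this is needed, user impact
  successCriteria?: string; // how to know it's done
}

function buildPrompt(brief: AgentBrief): string {
  const lines = [`@${brief.agent} ${brief.objective}`];
  if (brief.files?.length) lines.push(`Relevant files: ${brief.files.join(", ")}`);
  if (brief.constraints?.length) lines.push(`Constraints: ${brief.constraints.join("; ")}`);
  if (brief.background) lines.push(`Background: ${brief.background}`);
  if (brief.successCriteria) lines.push(`Done when: ${brief.successCriteria}`);
  return lines.join("\n");
}

console.log(buildPrompt({
  agent: "frontend-developer",
  objective: "add virtualization to the 1000-row dashboard table",
  files: ["src/components/Dashboard.tsx"],
  constraints: ["use react-virtual", "keep bundle size under 100KB"],
  background: "users report 3s+ dashboard load times",
  successCriteria: "table scrolls smoothly with 1000 rows on mobile",
}));
```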

Quality Control: Reviewing Agent Outputs

The 3-Check Review System

Check 1: Did the agent understand the task?

  • Does output address the core objective?
  • Are requirements met?
  • Is scope correct (not too narrow/broad)?

Check 2: Is the output quality high?

  • Technical correctness
  • Best practices followed
  • Edge cases considered
  • Performance acceptable

Check 3: Does it integrate well?

  • Fits with existing code/design
  • Consistent with project patterns
  • No conflicts with other components
  • Documentation adequate

When to Request Revisions

Immediate revision needed:

  • Core requirement missed
  • Technical errors
  • Security issues
  • Performance problems

Consider accepting:

  • Minor style differences
  • Alternative but valid approaches
  • "Good enough" for MVP
  • Time constraints

How to request revisions:

✅ Good: "@frontend-developer the dashboard works but needs these changes:
1. Use shadcn/ui Card component instead of custom div
2. Add loading state for API calls
3. Make table sortable
Please update"

❌ Bad: "This isn't what I wanted, redo it"

Common Multi-Agent Workflows

Workflow 1: New Feature End-to-End

1. @ux-researcher validate feature need
2. @sprint-prioritizer determine whether it's a priority (optional)
3. @ui-designer create interface designs
4. @frontend-developer implement UI
5. @backend-architect build API (can run in parallel with step 4 when the backend work is independent)
6. @test-writer-fixer add test coverage
7. @performance-benchmarker verify performance
8. @devops-automator set up deployment

Workflow 2: Bug Investigation & Fix

1. @Explore find relevant code
2. @test-writer-fixer reproduce with failing test
3. @backend-architect (or frontend-developer) implement fix
4. @test-writer-fixer verify fix and update tests
5. @devops-automator deploy hotfix

Workflow 3: Launch Preparation

Parallel tracks:
├─ @devops-automator: infrastructure ready
├─ @app-store-optimizer: store listing optimized
├─ @support-responder: help docs created
├─ @legal-compliance-checker: legal review done
└─ @analytics-reporter: tracking set up

Then:
@project-shipper: coordinate launch with all pieces ready

Workflow 4: Performance Optimization

1. @performance-benchmarker: identify bottlenecks
2. Parallel improvements:
   ├─ @frontend-developer: optimize UI rendering
   ├─ @backend-architect: optimize database queries
   └─ @infrastructure-maintainer: scale infrastructure
3. @performance-benchmarker: verify improvements

Workflow 5: Design System Implementation

1. @brand-guardian: define brand guidelines
2. @ui-designer: create component designs
3. @accessibility-specialist: accessibility review
4. @frontend-developer: implement shadcn/ui-based system
5. @whimsy-injector: add delightful touches
6. Deploy system:
   @frontend-developer: integrate into existing pages

Agent Handoff Patterns

Clean Handoff (Best Practice)

Step 1: @ui-designer
"Design a user profile page with avatar, bio, stats section"

[Agent produces design]

Step 2: @frontend-developer
"Implement this design: [design specs from ui-designer]
Use shadcn/ui components
Make responsive (mobile-first)"

[Agent implements]

Step 3: @test-writer-fixer
"Add tests for this component: [implementation from frontend-developer]
Cover: rendering, user interactions, responsive behavior"

Messy Handoff (Avoid)

❌ "Someone build me a profile page" (unclear who)
❌ "@frontend-developer make it pretty" (wrong agent for design)
❌ Pass half-baked output to next agent without review

Parallel Agent Coordination

Running Agents in Parallel

When to parallelize:

  • Independent tasks
  • Different domains (design + backend)
  • Time-sensitive work
  • Multiple perspectives needed

How to coordinate:

Single message with multiple @mentions:

@ui-designer create landing page hero section
@tiktok-strategist develop content strategy for launch
@app-store-optimizer optimize app store presence
@support-responder write FAQ for common questions

[All agents work simultaneously]
[Review all outputs]
[Integrate findings]

Managing Parallel Outputs

Integration strategy:

  1. Let all agents complete
  2. Review each output individually
  3. Identify conflicts or overlaps
  4. Synthesize into coherent plan
  5. Follow up with specific agents for adjustments

Example:

ui-designer suggests blue branding
brand-guardian enforces existing orange brand
→ Resolve: Follow brand-guardian (brand consistency > new ideas)

backend-architect proposes Postgres
finance-tracker shows MongoDB is cheaper
→ Resolve: Depends on priority (cost vs features)

Troubleshooting Agent Coordination

| Problem | Solution |
| --- | --- |
| Agent misunderstood task | Provide more context, be more specific |
| Output doesn't match style | @mention existing code/patterns to follow |
| Agents giving conflicting advice | Decide based on your priorities, or use studio-coach |
| Too many agents, confused | Simplify: start with one agent, add others as needed |
| Agent went too broad/narrow | Clarify scope explicitly in request |
| Output quality low | Check if right agent for task, improve context |

Advanced Patterns

Meta-Coordination with studio-coach

When: Coordinating 5+ agents or complex multi-phase projects

@studio-coach help me coordinate agents for: [complex project]

Available agents: [list relevant ones]
Timeline: [deadline]
Priorities: [what matters most]

[studio-coach will suggest optimal agent coordination strategy]

Dynamic Agent Selection

When: Unclear which agent is best

Option 1: Ask Explore first
@Explore find the code responsible for [feature]
[Based on findings, select appropriate agent]

Option 2: Start broad, narrow down
Start with general-purpose agent
If specialized skill needed → switch to specialist agent

Agent Pipeline Templates

Save your successful patterns:

## Template: Feature Launch

1. @trend-researcher validate market demand
2. @ui-designer create designs
3. @rapid-prototyper build MVP
4. Parallel:
   - @test-writer-fixer add tests
   - @devops-automator setup deployment
5. @project-shipper coordinate launch
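A saved template can also be captured as data and replayed: an ordered list of stages, where a stage is either a single step or a group of steps to run in parallel. This is an illustrative sketch, not a built-in Claude Code feature; `runAgent` is again a hypothetical placeholder:

```typescript
// A pipeline captured as data: an ordered list of stages, where a stage is
// either one step or a group of steps to run in parallel. Illustrative only.
type Step = { agent: string; task: string };
type Stage = Step | Step[];

const featureLaunch: Stage[] = [
  { agent: "trend-researcher", task: "validate market demand" },
  { agent: "ui-designer", task: "create designs" },
  { agent: "rapid-prototyper", task: "build MVP" },
  [
    { agent: "test-writer-fixer", task: "add tests" },
    { agent: "devops-automator", task: "set up deployment" },
  ],
  { agent: "project-shipper", task: "coordinate launch" },
];

// Hypothetical helper standing in for a real agent invocation.
async function runAgent(agent: string, prompt: string): Promise<string> {
  return `[output of @${agent}]`; // placeholder
}

async function runPipeline(stages: Stage[], context: string): Promise<string> {
  for (const stage of stages) {
    const steps = Array.isArray(stage) ? stage : [stage];
    const outputs = await Promise.all(
      steps.map((s) => runAgent(s.agent, `${s.task}\nContext so far:\n${context}`))
    );
    context += "\n" + outputs.join("\n"); // in practice, review before feeding forward
  }
  return context;
}

runPipeline(featureLaunch, "Feature: collaborative playlists").then(console.log);
```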

Best Practices Summary

Do:

  • ✅ Choose the most specialized agent for the task
  • ✅ Provide clear, specific objectives
  • ✅ @mention relevant files and context
  • ✅ Review outputs before passing them to the next agent
  • ✅ Run independent tasks in parallel
  • ✅ Use sequential chains for dependencies

Don't:

  • ❌ Use a generic agent when a specialist exists
  • ❌ Make vague requests without context
  • ❌ Hand off work blindly without reviewing outputs
  • ❌ Run independent tasks sequentially
  • ❌ Ask one agent to do everything
  • ❌ Skip quality checks

Success Metrics

You're coordinating well when:

  • Right agent selected on first try (not trial and error)
  • Minimal revision cycles (good context = good output)
  • Smooth handoffs between agents
  • Parallel execution saving time
  • High-quality integrated outputs
  • Predictable, repeatable workflows

Signs you need to improve:

  • Constantly switching agents mid-task
  • Multiple revision requests
  • Conflicting outputs from different agents
  • Everything sequential (slow)
  • Outputs don't integrate well
  • Every project feels novel

Quick Reference

Before starting any task:

  1. What's the goal? (be specific)
  2. Which agent(s) are best? (use decision tree)
  3. What context do they need? (files, constraints, background)
  4. Sequential or parallel? (dependencies vs independence)
  5. How will I review outputs? (success criteria)

The golden rule of agent coordination: The right specialist with the right context beats a generalist every time.

Master these patterns and you'll accomplish in hours what used to take days, with higher quality and less stress.