---
name: subagents
description: This skill should be used when determining patterns for parallel and sequential subagent usage. Use when planning subagent orchestration or briefing subagents on tasks.
---
# Subagent Patterns

## Purpose
Standardize how subagents are used for efficiency and consistency.
## Subagent Types
Explorer - Codebase research and discovery
- Use for: "Find all files that...", "How does X work?", "What pattern does this project use for Y?"
- Fast, read-only, returns summary
- Parallelizable: Yes
Implementer - Write code for bounded task
- Use for: "Create component X", "Add function Y", "Write tests for Z"
- Gets: spec section, file context, success criteria
- Returns: code changes, summary
- Parallelizable: Yes, if tasks are independent
Reviewer - Analyze code with fresh eyes
- Use for: Code review, security audit, spec compliance check
- Gets: diff/code, spec, conventions
- Does NOT get: implementer's reasoning
- Returns: review notes, issues found
- Parallelizable: Can run multiple focused reviews in parallel
Researcher - External information gathering
- Use for: Documentation lookup, API research, best practices
- Uses: WebSearch, WebFetch, MCP servers for external data
- Returns: summarized findings, recommendations
- Parallelizable: Yes
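For planning purposes, the role taxonomy above can be encoded as plain data so a planner can check parallelizability before dispatching. A minimal sketch in Python; the `SubagentRole` structure and field names are illustrative, not part of any real API:

```python
# Illustrative only: one way to encode the four roles as data.
# These names are not a real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class SubagentRole:
    name: str
    use_for: str
    parallelizable: bool
    note: str = ""

ROLES = [
    SubagentRole("explorer", "codebase research and discovery", True),
    SubagentRole("implementer", "write code for a bounded task", True,
                 note="only if tasks are independent"),
    SubagentRole("reviewer", "analyze code with fresh eyes", True,
                 note="multiple focused reviews can run in parallel"),
    SubagentRole("researcher", "external information gathering", True),
]
```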
## Parallel Execution Pattern
When tasks are independent:
```
Identify independent tasks
        ↓
Spawn subagents in single message (parallel)
        ↓
Collect all results
        ↓
Synthesize/integrate in main agent
```
Example - implementing 3 independent components:
```
[Main agent]
├── [Subagent 1] → Component A
├── [Subagent 2] → Component B
└── [Subagent 3] → Component C
        ↓
[Main agent] integrates results, handles cross-cutting concerns
```
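A minimal sketch of this fan-out in Python; `spawn_subagent` is a hypothetical placeholder for whatever mechanism actually launches a subagent and waits for its result:

```python
# Sketch only: spawn_subagent() is a hypothetical stand-in, not a real API.
from concurrent.futures import ThreadPoolExecutor

def spawn_subagent(brief: dict) -> dict:
    # Placeholder: launch one subagent with its brief and block until it
    # returns a summary of what it did.
    return {"task": brief["task"], "summary": f"done: {brief['task']}"}

briefs = [
    {"task": "Implement component A"},
    {"task": "Implement component B"},
    {"task": "Implement component C"},
]

# Dispatch all independent tasks at once, then collect every result before
# the main agent integrates them and handles cross-cutting concerns.
with ThreadPoolExecutor(max_workers=len(briefs)) as pool:
    results = list(pool.map(spawn_subagent, briefs))
```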
## Sequential Execution Pattern
When tasks have dependencies:
```
Task A (no dependencies)
  ↓ output feeds into
Task B (depends on A)
  ↓ output feeds into
Task C (depends on B)
```
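The same chain as a sketch, where each brief carries the previous task's output forward; `run_subagent` is again a hypothetical placeholder:

```python
# Sketch only: run_subagent() is a hypothetical stand-in, not a real API.
def run_subagent(brief: dict) -> dict:
    # Placeholder: launch one subagent and return its output summary.
    return {"summary": f"completed: {brief['task']}"}

result_a = run_subagent({"task": "Task A (no dependencies)"})
result_b = run_subagent({"task": "Task B", "context": result_a["summary"]})
result_c = run_subagent({"task": "Task C", "context": result_b["summary"]})
```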
## Subagent Briefing Template
```markdown
## Task
<one sentence: what to do>

## Context
<relevant spec section or summary>

## Files to Read
- <explicit file list>

## Success Criteria
- <how to know when done>
- <what output to produce>

## Constraints
- <scope boundaries>
- <patterns to follow>
- <things to avoid>

## Output Location
<where to write results, if applicable>
```
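A briefing can also be assembled from structured fields rather than written ad hoc. A sketch; the field names simply mirror the template headings above and are otherwise arbitrary:

```python
# Sketch: render the briefing template from structured fields.
BRIEFING = """\
## Task
{task}

## Context
{context}

## Files to Read
{files}

## Success Criteria
{criteria}

## Constraints
{constraints}

## Output Location
{output}
"""

def render_brief(task, context, files, criteria, constraints, output="n/a"):
    def bullets(items):
        return "\n".join(f"- {item}" for item in items)
    return BRIEFING.format(
        task=task,
        context=context,
        files=bullets(files),
        criteria=bullets(criteria),
        constraints=bullets(constraints),
        output=output,
    )
```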
## Code-Based Tool Orchestration
When a subagent (or main agent) needs to perform multiple tool operations:
Don't:
```python
# Each call adds to context
results1 = Read(file1)    # → context
results2 = Read(file2)    # → context
results3 = Grep(pattern)  # → context
# Context now bloated with all raw results
```
Do:
```python
# Write a script that processes and filters
script = """
import json

def extract_summary(text):
    # Placeholder: stand-in for whatever summarization/filtering logic applies
    return text.splitlines()[0] if text else ""

files = ['file1.py', 'file2.py', 'file3.py']
findings = []
for f in files:
    content = open(f).read()
    if 'relevant_pattern' in content:
        findings.append({'file': f, 'summary': extract_summary(content)})
print(json.dumps(findings))
"""
# Run script → only filtered findings enter context
```
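Continuing the sketch above, one hypothetical way to execute that script and keep only its printed JSON; the subprocess invocation is an assumption for illustration, not a prescribed mechanism:

```python
# Hedged sketch: run the filtering script in a subprocess and keep only its
# JSON output, so raw file contents never enter the main agent's context.
import json
import subprocess
import sys

proc = subprocess.run(
    [sys.executable, "-c", script],
    capture_output=True, text=True, check=True,
)
findings = json.loads(proc.stdout)  # only the filtered findings are kept
```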
When to use programmatic orchestration:
- Reading/processing more than 2-3 files
- Search operations that may return many results
- Any multi-step workflow where intermediate data isn't needed
- Data transformation or filtering operations
When direct tool calls are fine:
- Single file read
- One-off search with expected small results
- Operations where full context is genuinely needed
## Anti-Patterns
Don't:
- Spawn subagent for trivial tasks (adds overhead)
- Give subagent entire conversation context (wastes tokens)
- Let subagent make architectural decisions (escalate instead)
- Spawn subagent without clear success criteria (will flounder)
- Use subagent when main agent has the context (unnecessary indirection)
Do:
- Parallelize independent exploration/research tasks
- Use implementer subagents for isolated code units
- Use reviewer subagent separate from implementer (fresh eyes)
- Provide explicit, bounded tasks with clear deliverables