Claude Code Plugins

Community-maintained marketplace


Orchestrate parallel execution of multiple CLI agents (Claude Code, Codex, Gemini) for competitive evaluation of complex tasks. Use when the user says "run multi-agent framework", "compare agents", "launch competitive evaluation", "use parallel agents", or requests multiple approaches for tasks with complexity >7/10 where multiple valid implementation strategies exist and the best solution matters.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by going through its instructions before using it.

SKILL.md

name: multi-agent-orchestrator
description: Orchestrate parallel CLI agents (Claude Code, Codex, Gemini) for competitive evaluation. Use when user says "run multi-agent", "compare agents", "launch competitive evaluation", "use parallel agents", or for complex tasks (complexity >7/10) where multiple approaches exist and the best solution matters.
version: 2.0.0

πŸ“‹ Multi-Agent Orchestrator = Competitive Parallel Execution

Core Principle: Launch N CLI agents (Claude Code, Codex, Gemini) on an identical task β†’ compare their self-evaluations β†’ declare a winner based on measurable success criteria. πŸ”΄ CRITICAL: NEVER MOCK DATA! Try multiple approaches to get real data; if all fail, stop and document the attempts.

Multi-Agent Workflow Structure (Continuant - TD):

graph TD
    Task[Task File] --> Claude[Claude Code Workspace]
    Task --> Codex[Codex CLI Workspace]
    Task --> Gemini[Gemini Workspace]

    Claude --> CPlan[01_plan_claude_code.md]
    Claude --> CResults[90_results_claude_code.md]
    Claude --> CArtifacts[All Artifacts]

    Codex --> DPlan[01_plan_codex.md]
    Codex --> DResults[90_results_codex.md]

    Gemini --> GPlan[01_plan_gemini.md]
    Gemini --> GResults[90_results_gemini.md]

Orchestration Process Flow (Occurrent - LR):

graph LR
    A[Agree on Folder] --> B[Create Draft Task]
    B --> C[User Edits File]
    C --> D[User Says Ready]
    D --> E[Launch Parallel Agents]
    E --> F[Monitor Plan Files]
    F --> G[Compare 90_results]
    G --> H[Declare Winner]

Ontological Rule: TD for workspace structure (what exists), LR for orchestration workflow (what happens)

Primary source: algorithms/product_div/Multi_agent_framework/00_MULTI_AGENT_ORCHESTRATOR.md
Session ID: e9ce3592-bd66-4a98-b0e7-fcdd8edb5d42 by Daniel Kravtsov (2025-11-13) - v2.0.0
Release log: See SKILL_RELEASE_LOG.md for full version history

🎯 When to Use

ΒΆ1 Use multi-agent framework when:

  • Task complexity >7/10
  • Multiple valid implementation approaches exist
  • Need competitive evaluation
  • Best solution critically important

ΒΆ2 Use Task tool sub-agents when:

  • Single specialized capability (gmail, notion, jira)
  • Standard workflow exists
  • Quick operation needed
  • Complexity <5/10

πŸ“ Setup Workflow

ΒΆ1 MANDATORY FIRST STEP: Agree on Location

Before creating anything, ask:

  • "Where should I create this task folder?" (suggest 2-3 options based on task type)
  • "What should the folder name be?" (format: XX_descriptive_name)

Example:

πŸ€–: "For your task, I suggest:
   1. /client_cases/[client]/15_[task]/ (if client-specific)
   2. /algorithms/product_div/15_[task]/ (if algorithm)

   Which location? And folder name?"

πŸ‘€: "Use client_cases/HP/15_customer_metrics/"

πŸ€–: "βœ… Creating task in: /client_cases/HP/15_customer_metrics/"

ΒΆ2 Create Draft Task File Immediately

After the folder is agreed, create a quick draft; the user will edit it directly:

# Create the agreed task folder and work inside it
mkdir -p [agreed_path]
cd [agreed_path]

# Write the draft task file for the user to edit
cat > 01_task_multi_agent.md << 'EOF'
## Task: [Your quick understanding]

**Success Criteria:** [DRAFT - user refines]
- [Draft criterion 1]
- [Draft criterion 2]

## Instructions for User:
1. πŸ“ EDIT THIS FILE - Add details, fix criteria
2. βœ… CONFIRM - Reply "Ready" when good
3. πŸ”„ ITERATE - Edit and reply with changes

**Current Status:** πŸ”„ AWAITING YOUR EDITS

## Agents Artifact Requirement
Each agent MUST create:
- `01_plan_[agent].md` - Planning with progress updates
- `90_results_[agent].md` - Results with self-evaluation
- All outputs in workspace folder (claude_code/, codex_cli/, gemini/)

**Self-Evaluation Format:**
### Criterion 1: [from task]
**Status:** βœ…/❌/⚠️ | **Evidence:** [data] | **Details:** [how tested]

## Overall: X/Y criteria met | Grade: βœ…/❌/⚠️
EOF

# Create one workspace folder per agent
mkdir -p claude_code codex_cli gemini
cd ..

echo "πŸ“„ Task file: [agreed_path]/01_task_multi_agent.md"
echo "πŸ”— file://[full_path]"

ΒΆ3 User Edits Task File

The user has full control: they edit the file directly in their IDE, with no chat back-and-forth.

User workflow:

  1. Open file (link provided)
  2. Edit directly - improve description, refine criteria
  3. Reply "Ready" or "Change criterion #2 to: [text]"

ΒΆ4 Wait for Confirmation

DO NOT PROCEED until user says "Ready".

Acceptable:

  • βœ… "Ready"
  • βœ… "Ready with changes: [edits]"
  • βœ… "Change criterion #2 to: [text]"

πŸ”„ Execution

ΒΆ1 Launch Parallel Agents

When user says "Ready":

# Run in background
./run_parallel_agents.sh [agreed_path]/01_task_multi_agent.md &
SCRIPT_PID=$!

# Monitor progress
ps -p $SCRIPT_PID
tail -f [agreed_path]/*/claude_output.log

Script location: algorithms/product_div/Multi_agent_framework/run_parallel_agents.sh

Scripts handle automatically:

  • βœ… Repository root execution
  • βœ… .env file loading
  • βœ… Workspace setup/cleanup
  • βœ… Background process management
  • βœ… Real-time monitoring (updates every 5s)

Timing:

  • Codex: 2-3 min
  • Claude: 5+ min
  • Gemini: 3-5 min
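
For orientation, here is a minimal, hedged sketch of what such a parallel launcher can look like. It only calls the per-agent wrappers listed under "Scripts & References" below; the bundled run_parallel_agents.sh handles more (repo-root execution, .env loading, cleanup, live monitoring) and may differ in detail:

#!/usr/bin/env bash
# Hedged sketch only - assumes run_claude_agent.sh, run_codex_agent.sh and
# run_gemini_agent.sh exist next to this script and accept a task file path.
TASK_FILE="$1"                      # e.g. [agreed_path]/01_task_multi_agent.md
TASK_DIR="$(dirname "$TASK_FILE")"

for agent in claude codex gemini; do
  ./run_${agent}_agent.sh "$TASK_FILE" > "$TASK_DIR/${agent}_output.log" 2>&1 &
done

wait                                # block until all three agents finish
echo "All agents finished - compare $TASK_DIR/*/90_results_*.md"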

ΒΆ2 Monitor via Plan Files

Track progress:

cat [agreed_path]/claude_code/01_*_plan_claude_code.md
cat [agreed_path]/codex_cli/01_*_plan_codex.md
cat [agreed_path]/gemini/01_*_plan_gemini.md
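
If you prefer a single command over repeated cat calls, a hedged polling loop (assuming the file naming shown above) keeps printing the latest progress until all three results files appear:

# Poll the plan files until every agent has written its 90_results file
while [ "$(ls [agreed_path]/*/90*results*.md 2>/dev/null | wc -l)" -lt 3 ]; do
  for plan in [agreed_path]/*/01*plan*.md; do
    [ -f "$plan" ] || continue        # skip agents that have not planned yet
    echo "=== $plan ==="
    tail -n 5 "$plan"                 # latest progress entries
  done
  sleep 30
done
echo "All three 90_results files are present"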

ΒΆ3 Artifact Placement (CRITICAL)

πŸ”΄ ALL ARTIFACTS MUST BE IN AGENT WORKSPACE FOLDER

Every agent MUST create ALL outputs in assigned workspace - NEVER in external directories.

❌ WRONG:

[task]/
β”œβ”€β”€ claude_code/
β”‚   β”œβ”€β”€ 01_plan.md βœ…
β”‚   └── 90_results.md βœ…
β”œβ”€β”€ data_processed/
β”‚   └── output.csv ❌ WRONG!
└── results.json ❌ WRONG!

βœ… CORRECT:

[task]/
β”œβ”€β”€ claude_code/
β”‚   β”œβ”€β”€ 01_plan.md βœ…
β”‚   β”œβ”€β”€ 90_results.md βœ…
β”‚   β”œβ”€β”€ output.csv βœ…
β”‚   β”œβ”€β”€ results.json βœ…
β”‚   └── script.py βœ…
└── codex_cli/
    β”œβ”€β”€ 01_plan.md βœ…
    └── 90_results.md βœ…

Why:

  1. Traceability - know which agent created what
  2. Comparison - side-by-side outputs
  3. Cleanup - delete failed results cleanly
  4. Reproducibility - exact inputs/outputs
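
A quick, hedged way to spot misplaced artifacts (assuming the workspace names claude_code/, codex_cli/ and gemini/) is to list every file in the task folder that is neither the task file nor inside an agent workspace; anything printed is in the wrong place:

# Any output here violates the artifact placement rule
find [agreed_path] -type f \
  ! -path "*/claude_code/*" ! -path "*/codex_cli/*" ! -path "*/gemini/*" \
  ! -name "01_task_multi_agent.md"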

ΒΆ4 Compare Self-Evaluations

No manual testing needed: compare the 90_results_*.md files only:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Success Criteria  β”‚ Claude  β”‚ Codex   β”‚ Gemini  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ Process <5s       β”‚ ❌ 6.2s β”‚ βœ… 3.8s β”‚ βœ… 4.1s β”‚
β”‚ Handle bad data   β”‚ βœ…      β”‚ βœ…      β”‚ βœ…      β”‚
β”‚ Unique approach   β”‚ ❌      β”‚ βœ…      β”‚ βœ…      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ CRITERIA MET      β”‚ 1/3     β”‚ 3/3     β”‚ 3/3     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
πŸ† WINNER: Tie Codex/Gemini

Winner = highest score (most βœ… criteria).
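
A rough, hedged tally (assuming each 90_results file uses the **Status:** line format from the task template) can pre-count the βœ… marks before you read the files in detail:

# Count βœ… status lines per agent workspace
for results in [agreed_path]/*/90*results*.md; do
  agent="$(basename "$(dirname "$results")")"   # claude_code / codex_cli / gemini
  met="$(grep -c 'Status:.*βœ…' "$results")"
  echo "$agent: $met criteria marked βœ…"
done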

πŸ”— Scripts & References

ΒΆ1 Ready-to-use scripts:

Main (recommended):

./run_parallel_agents.sh task_file.md

Individual:

./run_claude_agent.sh task_file.md
./run_codex_agent.sh task_file.md
./run_gemini_agent.sh task_file.md

ΒΆ2 Bundled resources:

Scripts:

  • scripts/create_task_file.sh - Generate standardized task files

References:

  • references/script_usage.md - Detailed script documentation
  • references/task_templates.md - Pre-built templates for common scenarios
  • algorithms/product_div/Multi_agent_framework/00_MULTI_AGENT_ORCHESTRATOR.md - Full guide

When to load:

  • Script errors β†’ Load script_usage.md
  • Task templates β†’ Load task_templates.md
  • Comprehensive understanding β†’ Load 00_MULTI_AGENT_ORCHESTRATOR.md

❌ Anti-Patterns

ΒΆ1 Common mistakes:

  • ❌ Using for simple tasks (just do them directly)
  • ❌ No clear success criteria (vague goals β†’ vague results)
  • ❌ Mocking data (NEVER create fake data)
  • ❌ Skipping user confirmation (always wait for "Ready")
  • ❌ External artifacts (all outputs belong in workspace folders)
  • ❌ Subjective evaluation (use measurable criteria only)

βœ… Quick Reference

ΒΆ1 Complete workflow:

1. User describes complex task
2. Verify complexity >7/10
3. Agree on folder location
4. Create draft task file
5. User edits and confirms "Ready"
6. Launch ./run_parallel_agents.sh &
7. Monitor plan files
8. Compare 90_results_*.md
9. Declare winner by criteria met
10. Document results

ΒΆ2 File templates:

# 01_plan_[agent].md
## My Approach ([agent])
- [ ] Step 1: [action]
## Progress: βœ… [timestamp] Step 1 complete

# 90_results_[agent].md
## Self-Evaluation ([agent])
### Criterion 1: [from task]
**Status:** βœ…/❌/⚠️ | **Evidence:** [data] | **Details:** [tested how]
## Overall: X/Y criteria | Grade: βœ…/❌/⚠️

ΒΆ3 Folder structure:

[agreed_path]/
β”œβ”€β”€ 01_task_multi_agent.md    # User-editable
β”œβ”€β”€ claude_code/              # Claude workspace
β”‚   β”œβ”€β”€ 01_*_plan_claude.md
β”‚   └── 90_*_results_claude.md
β”œβ”€β”€ codex_cli/                # Codex workspace
β”‚   β”œβ”€β”€ 01_*_plan_codex.md
β”‚   └── 90_*_results_codex.md
└── gemini/                   # Gemini workspace
    β”œβ”€β”€ 01_*_plan_gemini.md
    └── 90_*_results_gemini.md

Meta Note: See knowledge-framework skill for MECE/BFO principles. Multi-agent orchestrator uses CLI agents (not sub-agents), requires measurable success criteria, and selects winner through objective self-evaluation comparison.