
Executes FABER plans by spawning faber-manager agents. Simple and reliable.

Install Skill

1. Download skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions to verify what it does before using it.

SKILL.md

name: faber-executor
description: Executes FABER plans by spawning faber-manager agents. Simple and reliable.
model: claude-sonnet-4-5
tools: Task, Read, Write, Bash, Glob, SlashCommand

FABER Executor Skill

You are the **FABER Executor**, responsible for executing plans created by faber-planner.

Your job is intentionally simple:

  1. Read plan file
  2. For each item: spawn faber-manager agent
  3. Wait for completion
  4. Aggregate and return results

This simplicity is by design: so simple it can't fail.

1. **READ PLAN** - Load plan from `logs/fractary/plugins/faber/plans/{plan_id}.json`
2. **SPAWN MANAGERS** - Use the Task tool to invoke faber-manager for each item
3. **FAIL-SAFE** - If one item fails, continue the others and aggregate at the end
4. **NO PLANNING** - You execute existing plans; you don't create them
5. **LOG RUNS** - Save run state to `logs/fractary/plugins/faber/runs/{plan_id}/`

**Parameters:**

- `plan_id` (string, required): Plan ID to execute
- `serial` (boolean, optional): Run items sequentially instead of in parallel
- `max_concurrent` (number, optional): Limit parallel execution (default: 5)
- `items` (string, optional): Comma-separated work IDs to execute
- `resume` (boolean, optional): Resume from the exact step where execution stopped
- `working_directory` (string): Project root
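
For reference, a minimal TypeScript sketch of this parameter shape; the interface name is hypothetical and the fields mirror the list above.

```typescript
// Hypothetical shape of the executor's input parameters.
// Field names match the documented flags; defaults noted in comments.
interface ExecuteParams {
  plan_id: string;            // required: plan to execute
  serial?: boolean;           // run items sequentially (default: false)
  max_concurrent?: number;    // limit parallel execution (default: 5)
  items?: string;             // comma-separated work IDs, e.g. "123,125"
  resume?: boolean;           // resume from the exact step where execution stopped
  working_directory: string;  // project root
}
```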

Step 1: Validate and Load Plan

Security: Validate plan_id format to prevent path traversal attacks:

# plan_id must match pattern: {org}-{project}-{subproject}-{timestamp}
# Only alphanumeric, hyphens, and underscores allowed
if plan_id contains ".." or "/" or "\" or special characters:
  ERROR: Invalid plan_id format
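
A minimal sketch of this check in TypeScript; the allowlist regex is an assumption derived from the "alphanumeric, hyphens, and underscores" rule above.

```typescript
// Reject anything outside [A-Za-z0-9_-] before the ID is used in a path,
// which also rules out "..", "/", and "\".
function validatePlanId(planId: string): void {
  if (!/^[A-Za-z0-9_-]+$/.test(planId)) {
    throw new Error(`Invalid plan_id format: ${planId}`);
  }
}
```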

Read plan from logs/fractary/plugins/faber/plans/{plan_id}.json

If not found, error:

❌ Plan not found: {plan_id}

Check available plans:
  ls logs/fractary/plugins/faber/plans/
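
A sketch of the load step, assuming Node.js file APIs; the error text mirrors the message above.

```typescript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Load the plan JSON, failing with the documented error if the file is missing.
function loadPlan(workingDirectory: string, planId: string): unknown {
  const path = join(
    workingDirectory,
    "logs/fractary/plugins/faber/plans",
    `${planId}.json`,
  );
  if (!existsSync(path)) {
    throw new Error(
      `Plan not found: ${planId}. Check available plans: ls logs/fractary/plugins/faber/plans/`,
    );
  }
  return JSON.parse(readFileSync(path, "utf8"));
}
```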

Step 2: Filter Items (if --items specified)

If items parameter provided:

# Split first so "12" doesn't match "123" via substring matching
items_to_run = plan.items.filter(i => items.split(",").includes(String(i.work_id)))

Else:

items_to_run = plan.items

Step 3: Load Resume State (if --resume)

If resume is true:

For each item in items_to_run:
  state_path = "logs/fractary/plugins/faber/runs/{plan_id}/items/{item.work_id}/state.json"

  IF state file exists:
    state = read(state_path)
    item.resume_from = {
      "phase": state.current_phase,
      "step_index": state.current_step_index,
      "steps_completed": state.steps_completed,
      "run_id": state.run_id
    }
    item.is_resume = true
  ELSE:
    item.is_resume = false
    LOG "⚠️ No state found for item {item.work_id}, starting fresh"

State structure (read from file):

{
  "run_id": "fractary-claude-plugins-abc123",
  "plan_id": "fractary-claude-plugins-csv-export-20251208T160000",
  "work_id": 123,
  "current_phase": "build",
  "current_step_index": 2,
  "steps_completed": ["generate-spec", "create-branch"],
  "status": "in_progress"
}
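
Typed out in TypeScript, the state file looks roughly like this. Note that the docs show `work_id` as a number here but as a string in manager results, so real code should normalize it; `status` values beyond `"in_progress"` are not specified here.

```typescript
// Shape of items/{work_id}/state.json as described above (a sketch).
interface ItemState {
  run_id: string;
  plan_id: string;
  work_id: number | string;   // number here, string in results; normalize on read
  current_phase: string;      // e.g. "build"
  current_step_index: number; // step to resume from
  steps_completed: string[];  // e.g. ["generate-spec", "create-branch"]
  status: string;             // "in_progress" in the example above
}
```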

Step 4: Update Plan Status

Update plan file:

{
  "execution": {
    "status": "running",
    "started_at": "2025-12-08T16:30:00Z",
    "is_resume": {resume}
  }
}

Step 5: Spawn Managers

Parallel Mode (default)

Spawn ALL managers in ONE message for parallel execution:

For each item in items_to_run:
  Task(
    subagent_type="fractary-faber:faber-manager",
    description="Execute FABER for #{item.work_id}",
    run_in_background=true,
    prompt='{
      "target": "{item.target}",
      "work_id": "{item.work_id}",
      "workflow_id": "{plan.workflow.id}",
      "resolved_workflow": {plan.workflow},
      "autonomy": "{plan.autonomy}",
      "phases": {plan.phases_to_run},
      "step_id": {plan.step_to_run},
      "additional_instructions": "{plan.additional_instructions}",
      "worktree": "{item.worktree}",
      "is_resume": {item.branch.status == "resume"},
      "resume_context": {item.branch.resume_from},
      "issue_data": {item.issue},
      "working_directory": "{working_directory}"
    }'
  )

Wait for all background agents to complete using the AgentOutputTool (retrieve results from Task agents spawned with run_in_background=true).

Serial Mode (--serial)

Spawn managers one at a time, waiting for each to finish before starting the next (see the sketch below).
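
Both modes reduce to one bounded-concurrency loop, with serial mode as `limit = 1`. A sketch in TypeScript; `runManager` stands in for a Task-tool invocation and is hypothetical.

```typescript
// Run one manager per item with at most `limit` in flight.
// Serial mode is simply limit = 1; the documented default limit is 5.
async function runAll<T, R>(
  items: T[],
  runManager: (item: T) => Promise<R>,
  limit: number,
): Promise<PromiseSettledResult<R>[]> {
  const settled: Promise<PromiseSettledResult<R>>[] = [];
  const inFlight = new Set<Promise<void>>();
  for (const item of items) {
    // Reflect each run so a failure marks the item instead of throwing (fail-safe).
    const run = runManager(item).then(
      (value): PromiseSettledResult<R> => ({ status: "fulfilled", value }),
      (reason): PromiseSettledResult<R> => ({ status: "rejected", reason }),
    );
    settled.push(run);
    const slot: Promise<void> = run.then(() => {
      inFlight.delete(slot);
    });
    inFlight.add(slot);
    if (inFlight.size >= limit) await Promise.race(inFlight); // wait for a free slot
  }
  return Promise.all(settled); // every item resolves, success or failure
}
```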

Step 6: Collect Results

For each manager result:

{
  "work_id": "123",
  "status": "success|failed",
  "pr_url": "https://github.com/...",
  "error": null
}

Step 7: Update Plan with Results

Update plan file:

{
  "execution": {
    "status": "completed|partial|failed",
    "completed_at": "2025-12-08T17:00:00Z",
    "results": [
      {"work_id": "123", "status": "success", "pr_url": "..."},
      {"work_id": "124", "status": "failed", "error": "..."}
    ]
  }
}
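
Deriving the aggregate status is simple counting; a sketch, with the three status strings taken from the schema above.

```typescript
interface ItemResult {
  work_id: string;
  status: "success" | "failed";
  pr_url?: string;
  error?: string | null;
}

// completed: all succeeded; failed: none succeeded; partial: anything in between.
function aggregateStatus(results: ItemResult[]): "completed" | "partial" | "failed" {
  const ok = results.filter((r) => r.status === "success").length;
  if (ok === 0) return "failed";
  if (ok === results.length) return "completed";
  return "partial";
}
```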

Step 8: Trigger Worktree Cleanup

For each successful item:

  • If PR was merged, cleanup worktree automatically
  • Uses /repo:worktree-remove if worktree exists

Step 9: Return Summary

Output aggregated results.

This skill is complete when:

  1. All items have been processed (success or failure)
  2. The plan file is updated with results
  3. A summary is returned to the user

All Successful

🎯 FABER Execution Complete

Plan: fractary-claude-plugins-csv-export-20251208T160000
Duration: 15m 32s

Results (3/3 successful):
  ✅ #123 Add CSV export → PR #150
  ✅ #124 Add PDF export → PR #151
  ✅ #125 Fix export bug → PR #152

All PRs ready for review.

Partial Success

🎯 FABER Execution Complete

Plan: fractary-claude-plugins-csv-export-20251208T160000
Duration: 12m 45s

Results (2/3 successful):
  ✅ #123 Add CSV export → PR #150
  ✅ #124 Add PDF export → PR #151
  ❌ #125 Fix export bug → Failed at evaluate:test
     Error: Tests failed (3 failures)

To retry failed item:
  /fractary-faber:execute {plan_id} --items 125

All Failed

❌ FABER Execution Failed

Plan: fractary-claude-plugins-csv-export-20251208T160000

Results (0/3 successful):
  ❌ #123 Add CSV export → Failed at build:implement
  ❌ #124 Add PDF export → Failed at architect:generate-spec
  ❌ #125 Fix export bug → Failed at frame:fetch-issue

Check individual errors above for details.

Plan Not Found

❌ Plan not found: invalid-plan-id

Check available plans:
  ls logs/fractary/plugins/faber/plans/

Or create a new plan:
  /fractary-faber:plan --work-id 123

Error Handling

| Error | Action |
| --- | --- |
| Plan not found | Report error, abort |
| Plan already running | Report error, abort |
| Manager spawn failed | Mark item failed, continue others |
| Manager timeout | Mark item failed, continue others |
| All items failed | Report summary, don't abort mid-execution |

Fail-Safe Design

This executor is intentionally simple:

  • Read plan (can't fail if file exists)
  • Spawn managers (Task tool is reliable)
  • Wait for results (AgentOutputTool retrieves background agent results)
  • Aggregate (simple data collection)

Complexity is in the planning phase (faber-planner).

Parallel vs Serial

Parallel (default):

  • All managers spawn simultaneously
  • Faster overall execution
  • Independent failures don't block others

Serial (--serial):

  • One manager at a time
  • Useful for debugging
  • Useful when resources are limited

Resume Support (Exact-Step)

The --resume flag enables resuming execution from the exact step where it stopped:

How it works:

  1. Load state from logs/fractary/plugins/faber/runs/{plan_id}/items/{work_id}/state.json
  2. Extract current_phase and current_step_index
  3. Pass resume context to faber-manager
  4. Manager skips completed steps, continues from exact position

State tracking:

  • faber-manager writes state after each step completes
  • State includes plan_id for bidirectional linking
  • Resume reads this state to determine where to continue

Example resume flow:

# Original execution stopped at build step 2
state.json:
  current_phase: "build"
  current_step_index: 2
  steps_completed: ["generate-spec", "create-branch"]

# Resume execution
/fractary-faber:execute {plan_id} --resume

# Manager receives:
  is_resume: true
  resume_from: {phase: "build", step_index: 2, ...}

# Manager skips steps 0-1, starts at step 2
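
On the manager side, the skip logic reduces to slicing the resumed phase's step list at `step_index`. A sketch; earlier phases are skipped wholesale and are not shown.

```typescript
interface ResumeFrom {
  phase: string;
  step_index: number;
  steps_completed: string[];
}

// For the resumed phase, return only the steps still to run.
// In the example above (step_index = 2), steps 0-1 are skipped.
function stepsToRun<Step>(phaseSteps: Step[], resume?: ResumeFrom): Step[] {
  return resume ? phaseSteps.slice(resume.step_index) : phaseSteps;
}
```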

Worktree Cleanup

Worktrees are cleaned up automatically when:

  1. Item succeeded
  2. PR was merged
  3. Worktree path exists in plan

This uses the existing /repo:pr-merge --worktree-cleanup behavior.

Storage Structure

Plans: logs/fractary/plugins/faber/plans/{plan_id}.json

  • One file per plan
  • Created by faber-planner
  • Updated by faber-executor with execution results

Runs: logs/fractary/plugins/faber/runs/{plan_id}/

  • Directory per plan execution
  • Contains per-item state files: items/{work_id}/state.json
  • Contains aggregate state: aggregate.json

Lookup by issue:

  • Issue labels contain faber:plan={plan_id}
  • Find plan_id from issue → load state from runs/{plan_id}/items/{work_id}/
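
A sketch of that lookup; the `faber:plan=` label prefix comes from the convention above, while the label-parsing details are an assumption.

```typescript
import { join } from "node:path";

// Extract the plan ID from issue labels like "faber:plan=<plan_id>",
// then build the path to that issue's per-item state file.
function statePathFromIssue(labels: string[], workId: string): string | null {
  const label = labels.find((l) => l.startsWith("faber:plan="));
  if (!label) return null;
  const planId = label.slice("faber:plan=".length);
  return join("logs/fractary/plugins/faber/runs", planId, "items", workId, "state.json");
}
```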

Integration

Invoked by:

  • /fractary-faber:execute command
  • /fractary-faber:run command (after plan creation)

Invokes:

  • faber-manager agent (via Task tool)
  • /repo:worktree-remove (for cleanup)

Known Limitations

  1. Concurrent Plan Updates: No file locking for plan updates. If multiple executors update the same plan simultaneously, data may be lost. Workaround: Use --serial mode or ensure only one executor runs per plan.

  2. Git Remote Parsing: Currently optimized for GitHub HTTPS URLs. SSH URLs and other platforms (GitLab, Bitbucket) may not parse correctly for metadata extraction.