| name | orchestrator |
| description | Autonomous orchestration with task graph generation, dependency management, multi-phase execution, and file-based state management. Includes state trigger utilities, marker management, phase cleanup, and idempotent operations. Use when building orchestrator agents that manage complex multi-phase workflows, delegate tasks to specialized subagents, implement TDD workflows, manage file-based state with completion markers, or create autonomous systems with memory-first planning and quality gates. |
| cache_enabled | true |
| cache_zones | zone_1_policies, zone_2_sub-agents, zone_3_skills |
| cache_control | ephemeral |
Framework Orchestrator
Implement the autonomous orchestrator pattern for managing complex, multi-phase workflows with task graph generation, dependency management, and file-based state triggers.
Recommended Agents
This skill is most relevant for:
- Primary: autonomous_orchestrator - For managing complex task graphs and multi-phase execution
- Secondary: software-architect - For designing orchestrator system architectures
- Secondary: software-developer - For implementing orchestrator logic and task delegation
When to Use This Skill
Use this skill when:
- Building orchestrator agents that manage complex multi-phase workflows
- Implementing autonomous systems that delegate tasks to specialized subagents
- Creating dependency-based task graphs with parallel execution opportunities
- Designing systems with file-based state triggers and completion markers
- Implementing TDD-first workflows with quality gates
- Building memory-first planning systems (Reuse > Adapt > Generate)
- Creating autonomous multi-phase execution loops
Core Architecture
The autonomous orchestrator pattern consists of four operational phases executed in a continuous loop:
Phase 1: Task Ingestion & Planning
The orchestrator performs internal critique and refinement within thinking tags before generating any tasks.
Planning Checklist:
Phase Detection: Check for phase markers (e.g., {{TEMP_DIR}}/PHASE_1.complete). If none exist, start Phase 1. If one exists, plan the next phase.
Objective Deconstruction: Break objectives into atomic, verifiable components.
Memory-First Analysis: Query memory for existing artifacts. Prioritize:
- Reuse: Use existing artifacts without modification
- Adapt: Modify existing artifacts for new requirements
- Generate: Create new artifacts only when necessary
Policy Integration: Load project policies for:
- Naming conventions
- Quality gate requirements
- TDD mandates
- Security constraints
Task Graph Design: Generate task sequences with:
- Clear dependencies between tasks
- Parallel execution opportunities identified
- Completion markers for each task
- Idempotent cleanup steps
Subagent Assignment: Map tasks to specialized subagents:
- software-developer: Code implementation
- research-specialist: Information gathering
- refactor-assistant: Code quality improvements
- doc-writer: Documentation tasks (invoke Skill(doc-patterns) for quality metrics, citation standards, and anti-hallucination protocols)
- software-architect: System design decisions
- mcp-server-architect: Integration architecture
Quality Gate Definition: Define programmatic verification for each task:
- Linting requirements
- Test coverage thresholds
- Security scan gates
- Performance benchmarks
Final Review: Confirm the plan is:
- Efficient (leverages memory-first planning)
- Robust (includes failure handling)
- TDD-compliant (test tasks precede implementation)
- Autonomous (uses file markers for state)
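The Phase Detection step at the top of this checklist can be sketched in shell. This is a minimal illustration, assuming phase markers follow the PHASE_N.complete convention; it uses a throwaway directory with pre-seeded demo markers, where a real orchestrator would use {{TEMP_DIR}} and whatever state it finds.

```shell
# Phase-detection sketch: find the highest completed phase, plan the next.
TEMP_DIR="$(mktemp -d)"
touch "${TEMP_DIR}/PHASE_1.complete" "${TEMP_DIR}/PHASE_2.complete"  # demo state

latest=0
for marker in "${TEMP_DIR}"/PHASE_*.complete; do
  [ -e "$marker" ] || continue          # glob matched nothing: fresh run, start Phase 1
  n="${marker##*PHASE_}"                # strip everything through "PHASE_"
  n="${n%.complete}"                    # strip the ".complete" suffix
  [ "$n" -gt "$latest" ] && latest="$n"
done

NEXT_PHASE=$((latest + 1))
echo "Next phase to plan: ${NEXT_PHASE}"
```

With the two demo markers above, the sketch reports phase 3 as the next phase to plan.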
Phase 2: Task Execution
Delegation Pattern:
The orchestrator outputs task definitions using the Task tool to invoke specialized subagents.
Task Definition Structure:
<task>
<id>T1_IMPLEMENT_FEATURE</id>
<subagent>software-developer</subagent>
<dependencies>NONE</dependencies>
<instructions>
Implement the authentication feature with the following requirements:
- Use JWT tokens for session management
- Implement password hashing with bcrypt
- Add rate limiting for login attempts
Upon successful completion, create the marker file:
touch {{TEMP_DIR}}/T1_IMPLEMENT_FEATURE.complete
</instructions>
</task>
Execution Flow:
- Orchestrator delegates task via Task tool
- Subagent executes work following 3-Strike Failure Protocol
- Upon success, subagent creates completion marker
- Subagent terminates without reporting back
3-Strike Failure Protocol:
All subagents must implement:
- Attempt 1 (Retry): Retry with original approach
- Attempt 2 (Remedy): Try alternative approach or fix
- Attempt 3 (Halt): Create global failure marker and stop
Failure marker creation:
touch {{TEMP_DIR}}/GLOBAL_FAILURE.marker
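A minimal sketch of the protocol in shell, assuming run_task stands in for the subagent's real work (here it always fails so the halt path is exercised); the remedy step on attempt 2 would in practice try an alternative approach rather than repeat the same command.

```shell
# 3-Strike sketch: retry up to three times, then record a global failure
# and halt. Uses a throwaway directory for illustration.
TEMP_DIR="$(mktemp -d)"

run_task() {
  false  # hypothetical stand-in for the real task command; always fails here
}

three_strike() {
  task_id="$1"
  for attempt in 1 2 3; do
    if run_task; then
      touch "${TEMP_DIR}/${task_id}.complete"   # success: create completion marker
      return 0
    fi
    echo "Attempt ${attempt} failed" >&2        # attempt 2 would remedy, not just retry
  done
  # Strike 3: document the failure and signal a global halt
  echo "${task_id} failed after 3 attempts" > "${TEMP_DIR}/GLOBAL_FAILURE.marker"
  return 1
}

three_strike "T1_DEMO" || echo "Halting: global failure recorded"
```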
Phase 3: Autonomous Continuation
File-Based State Triggers:
Tasks are triggered by the existence of prerequisite completion markers.
Idempotent Task Pattern:
Every dependent task begins by cleaning up its prerequisite marker:
<task>
<id>T2_WRITE_TESTS</id>
<subagent>software-developer</subagent>
<dependencies>T1_IMPLEMENT_FEATURE</dependencies>
<instructions>
# Step 1: Cleanup prerequisite marker (idempotent)
rm -f {{TEMP_DIR}}/T1_IMPLEMENT_FEATURE.complete
# Step 2: Execute task work
Write comprehensive tests for the authentication feature:
- Test successful login flow
- Test invalid credentials handling
- Test rate limiting behavior
# Step 3: Create completion marker
touch {{TEMP_DIR}}/T2_WRITE_TESTS.complete
</instructions>
</task>
Loop Structure:
The orchestrator continues delegating tasks in sequence, waiting for markers and triggering dependent tasks until the entire graph completes.
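The marker-wait portion of this loop can be sketched as a polling helper; the function name and one-second interval are illustrative, not part of the skill's contract.

```shell
# Wait-for-marker sketch: block until a prerequisite marker appears,
# aborting early if a global failure is recorded.
TEMP_DIR="$(mktemp -d)"

wait_for_marker() {
  marker="${TEMP_DIR}/$1.complete"
  while [ ! -f "$marker" ]; do
    if [ -f "${TEMP_DIR}/GLOBAL_FAILURE.marker" ]; then
      echo "Global failure while waiting for $1" >&2
      return 1
    fi
    sleep 1   # polling interval is illustrative
  done
  return 0
}

touch "${TEMP_DIR}/T1_IMPLEMENT_FEATURE.complete"  # simulate the subagent finishing
wait_for_marker "T1_IMPLEMENT_FEATURE" && echo "T1 done - triggering T2"
```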
Phase 4: Phase Completion & Re-Invocation
Phase Cleanup Pattern:
The final task in every phase is a cleanup task:
<task>
<id>CLEANUP_PHASE_1</id>
<subagent>autonomous_orchestrator</subagent>
<dependencies>T2_WRITE_TESTS</dependencies>
<instructions>
# Step 1: Remove all task-level markers
rm -f {{TEMP_DIR}}/T1_IMPLEMENT_FEATURE.complete
rm -f {{TEMP_DIR}}/T2_WRITE_TESTS.complete
# Step 2: Create phase completion marker
touch {{TEMP_DIR}}/PHASE_1.complete
# Step 3: Verify no failure markers exist
if [ -f "{{TEMP_DIR}}/GLOBAL_FAILURE.marker" ]; then
echo "Phase 1 completed with failures - halting"
exit 1
fi
echo "Phase 1 completed successfully"
</instructions>
</task>
Autonomous Re-Invocation:
After phase completion, the orchestrator automatically re-invokes its entire system prompt to:
- Detect the new phase marker (e.g., PHASE_1.complete)
- Define objectives for the next phase
- Plan and execute the next task graph
- Continue the cycle until all work is complete
Task Graph Design Patterns
Pattern 1: Sequential Dependencies
T1_RESEARCH → T2_DESIGN → T3_IMPLEMENT → T4_TEST → T5_DEPLOY
Each task depends on the previous task's completion marker.
Pattern 2: Parallel Execution
T1_SETUP
├─→ T2A_FRONTEND (parallel)
├─→ T2B_BACKEND (parallel)
└─→ T2C_DATABASE (parallel)
↓
T3_INTEGRATION (depends on all T2 tasks)
Multiple tasks can execute in parallel when they share the same dependency.
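The fan-in at T3_INTEGRATION can be sketched as a check that every parallel marker exists before proceeding; the helper name is illustrative and the throwaway directory stands in for {{TEMP_DIR}}.

```shell
# Fan-in sketch: T3_INTEGRATION may start only when every T2 marker exists.
TEMP_DIR="$(mktemp -d)"

all_complete() {
  for id in "$@"; do
    [ -f "${TEMP_DIR}/${id}.complete" ] || return 1  # any missing marker blocks fan-in
  done
  return 0
}

touch "${TEMP_DIR}/T2A_FRONTEND.complete" "${TEMP_DIR}/T2B_BACKEND.complete"

if all_complete T2A_FRONTEND T2B_BACKEND T2C_DATABASE; then
  echo "All T2 tasks done - triggering T3_INTEGRATION"
else
  echo "Still waiting on a T2 task"   # T2C_DATABASE has no marker yet
fi
```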
Pattern 3: TDD Workflow
T1_TEST_PLAN → T2_WRITE_TESTS → T3_IMPLEMENT → T4_RUN_TESTS → T5_REFACTOR
Test plan and test writing must precede implementation (TDD Mandate).
Pattern 4: Quality Gate Integration
T1_IMPLEMENT → T2_LINT → T3_SECURITY_SCAN → T4_TEST_COVERAGE → T5_APPROVE
Each quality gate depends on the previous step and must pass programmatically.
Memory-First Planning Strategy
Before generating any new artifacts, query memory and apply the decision tree:
Decision Tree:
- Query Memory: Search for existing artifacts related to the task
- Evaluate Match Quality:
- Exact Match (Reuse): Use artifact as-is, create symlink or reference
- Partial Match (Adapt): Copy and modify artifact for new requirements
- No Match (Generate): Create new artifact from scratch
- Document Decision: Log reuse/adapt/generate decision in task notes
Implementation Pattern:
<thinking>
Task: Implement user authentication
Memory Query Results:
- Found: auth_module_v1.py (JWT-based authentication)
- Found: auth_module_v2.py (OAuth-based authentication)
- Found: session_manager.py (session handling)
Decision:
- auth_module_v1.py matches requirements (JWT tokens) → REUSE
- session_manager.py needs rate limiting added → ADAPT
- Password reset flow not found → GENERATE
Strategy:
1. REUSE auth_module_v1.py directly
2. ADAPT session_manager.py to add rate limiting
3. GENERATE password_reset_flow.py from scratch
</thinking>
State Management Utilities
This skill includes comprehensive utilities for managing file-based state in orchestrator workflows. State management is the foundation of autonomous, multi-phase execution.
Overview
The orchestrator uses file-based markers for state management instead of in-memory state. This approach ensures:
- Persistence: State survives process restarts
- Observability: State is visible and debuggable
- Idempotency: Operations can be safely repeated
- Decoupling: Tasks don't need direct communication
- Simplicity: No complex state stores required
Core State Management Concepts
Files Are the Source of Truth:
All state is represented by zero-byte marker files in the temporary directory. If a marker exists, that state is true. This principle enables reliable, observable orchestration.
Idempotent Operations:
All state operations must be idempotent (safe to run multiple times):
# Creating markers (always succeeds)
touch "{{TEMP_DIR}}/T1.complete"
touch "{{TEMP_DIR}}/T1.complete" # Safe to repeat
# Deleting markers (always succeeds with -f flag)
rm -f "{{TEMP_DIR}}/T1.complete"
rm -f "{{TEMP_DIR}}/T1.complete" # Safe to repeat
Atomic Operations:
Use atomic file operations to prevent race conditions:
# ATOMIC: Single system call (preferred)
touch "{{TEMP_DIR}}/T1.complete"
# NON-ATOMIC: Multiple operations (avoid)
echo "" > "{{TEMP_DIR}}/T1.complete"
chmod 644 "{{TEMP_DIR}}/T1.complete"
Bundled State Management Scripts
Two scripts are bundled for state management operations:
cleanup_markers.sh
Bash script for cleaning up marker files with multiple modes.
Location: scripts/cleanup_markers.sh
Features:
- Clean task markers by pattern
- Complete phases (clean tasks, create phase marker)
- Clean all markers (nuclear option)
- Dry-run mode to preview operations
- Verbose output for debugging
- Verification after cleanup
Usage Examples:
# Clean all task markers
bash scripts/cleanup_markers.sh \
--task-pattern "T*" \
--temp-dir "/tmp/orchestrator"
# Complete phase 1 (clean tasks, create phase marker)
bash scripts/cleanup_markers.sh \
--phase 1 \
--temp-dir "/tmp/orchestrator"
# Clean everything
bash scripts/cleanup_markers.sh \
--all \
--temp-dir "/tmp/orchestrator"
# Dry run to see what would be deleted
bash scripts/cleanup_markers.sh \
--task-pattern "T*" \
--dry-run \
--verbose
Exit Codes:
- 0 - Success
- 1 - Error (missing directory, verification failed)
- 2 - No markers found matching pattern
manage_state.py
Python script for comprehensive state management operations.
Location: scripts/manage_state.py
Features:
- List all markers with timestamps and details
- Create/delete completion markers
- Check task prerequisites
- Verify phase status
- Check for global failures
- Export state snapshots to JSON
- Colored output for readability
Usage Examples:
# List all markers
python scripts/manage_state.py list \
--temp-dir "/tmp/orchestrator" \
--verbose
# Create completion marker
python scripts/manage_state.py create-marker \
--task-id T1_IMPLEMENT \
--temp-dir "/tmp/orchestrator"
# Check prerequisites
python scripts/manage_state.py check-prerequisites \
--task-id T2_TEST \
--deps T1_IMPLEMENT \
--temp-dir "/tmp/orchestrator"
# Verify phase
python scripts/manage_state.py verify-phase \
--phase 1 \
--temp-dir "/tmp/orchestrator"
# Check for failure
python scripts/manage_state.py check-failure \
--temp-dir "/tmp/orchestrator"
# Export state snapshot
python scripts/manage_state.py export \
--output state-snapshot.json \
--temp-dir "/tmp/orchestrator"
Common State Management Workflows
Workflow 1: Phase Cleanup Task
Standard cleanup at end of phase:
#!/bin/bash
# Phase Cleanup Task
TEMP_DIR="${TEMP_DIR:-/tmp/orchestrator}"
PHASE_NUM="1"
# Step 1: Delete prerequisite markers (idempotent)
rm -f "${TEMP_DIR}/T1_IMPLEMENT.complete"
rm -f "${TEMP_DIR}/T2_WRITE_TESTS.complete"
# Step 2: Clean all task markers for this phase
# (glob must stay outside the quotes so the shell expands it)
rm -f "${TEMP_DIR}"/T*.complete
# Step 3: Verify no task markers remain
if ls "${TEMP_DIR}"/T*.complete >/dev/null 2>&1; then
echo "ERROR: Task markers still exist after cleanup"
exit 1
fi
# Step 4: Create phase completion marker
touch "${TEMP_DIR}/PHASE_${PHASE_NUM}.complete"
echo "Phase ${PHASE_NUM} cleanup completed successfully"
Workflow 2: Task with Prerequisites
Standard task with prerequisite checking and cleanup:
#!/bin/bash
# Task T2: Write Tests
# Dependencies: T1_IMPLEMENT
TEMP_DIR="${TEMP_DIR:-/tmp/orchestrator}"
TASK_ID="T2_WRITE_TESTS"
PREREQ="T1_IMPLEMENT"
# Step 1: Check for global failure
if [ -f "${TEMP_DIR}/GLOBAL_FAILURE.marker" ]; then
echo "Global failure detected. Halting execution."
exit 1
fi
# Step 2: Verify prerequisites
if [ ! -f "${TEMP_DIR}/${PREREQ}.complete" ]; then
echo "Prerequisite ${PREREQ} not complete"
exit 1
fi
# Step 3: Clean up prerequisite marker (idempotent)
rm -f "${TEMP_DIR}/${PREREQ}.complete"
# Step 4: Execute task work
echo "Writing tests..."
# ... actual test writing logic ...
# Step 5: Create completion marker
touch "${TEMP_DIR}/${TASK_ID}.complete"
echo "Task ${TASK_ID} completed successfully"
State Management Best Practices
- Always use idempotent deletion: Use rm -f for safe cleanup
- Use atomic file operations: touch is atomic; echo > is not
- Verify critical operations: Always check marker creation succeeded
- Clean prerequisites first: Delete prerequisite markers at task start
- Check for global failure: Before starting any task, verify no failure marker exists
- Use consistent temp directory: Always use the {{TEMP_DIR}} variable
- Include failure details: Write helpful context to failure markers
- Leverage bundled scripts: Use provided scripts for complex operations
- Export state for debugging: Create state snapshots when issues occur
Reference Documentation:
For detailed state management guidance, see:
- references/marker-conventions.md - Marker file naming patterns and lifecycle
- references/state-management.md - State transition patterns and best practices
File Marker Patterns
Standard Marker Locations
All markers should use a consistent temporary directory:
TEMP_DIR="${TEMP_DIR:-/tmp/orchestrator}"
mkdir -p "$TEMP_DIR"
Marker Naming Conventions
- Task Markers:
{{TEMP_DIR}}/{{TASK_ID}}.complete - Phase Markers:
{{TEMP_DIR}}/PHASE_{{N}}.complete - Failure Markers:
{{TEMP_DIR}}/GLOBAL_FAILURE.marker - Quality Gate Markers:
{{TEMP_DIR}}/{{TASK_ID}}_{{GATE_NAME}}.passed
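The naming conventions above can be centralized in small helper functions so tasks never hand-build marker paths; the function names here are illustrative, not part of the skill.

```shell
# Helper sketch deriving marker paths from the naming conventions above.
TEMP_DIR="${TEMP_DIR:-/tmp/orchestrator}"

task_marker()  { echo "${TEMP_DIR}/$1.complete"; }        # task completion
phase_marker() { echo "${TEMP_DIR}/PHASE_$1.complete"; }  # phase completion
gate_marker()  { echo "${TEMP_DIR}/$1_$2.passed"; }       # quality gate

# Usage: touch "$(task_marker T1_IMPLEMENT_FEATURE)"
task_marker T1_IMPLEMENT_FEATURE
```

Centralizing path construction keeps marker naming consistent across every task instruction.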
Marker Creation Pattern
# Create marker atomically
touch "{{TEMP_DIR}}/T1_IMPLEMENT_FEATURE.complete"
# Verify marker exists
if [ -f "{{TEMP_DIR}}/T1_IMPLEMENT_FEATURE.complete" ]; then
echo "Task T1 completed successfully"
fi
Marker Cleanup Pattern
# Idempotent cleanup (safe to run multiple times)
rm -f "{{TEMP_DIR}}/T1_IMPLEMENT_FEATURE.complete"
# Clean all task markers for a phase
# (glob must stay outside the quotes so the shell expands it)
rm -f "{{TEMP_DIR}}"/T*.complete
# Clean everything except phase markers
find "{{TEMP_DIR}}" -name "*.complete" -not -name "PHASE_*.complete" -delete
Subagent Task Delegation
Using the Task Tool
Delegate work to specialized subagents using the Task tool:
Task: Implement authentication feature
Subagent: software-developer
Instructions: [detailed task instructions]
The Task tool automatically:
- Invokes the specified subagent
- Provides task instructions in isolated context
- Returns control when subagent completes or fails
Task Instruction Template
<task>
<id>{{TASK_ID}}</id>
<subagent>{{SUBAGENT_NAME}}</subagent>
<dependencies>{{COMMA_SEPARATED_TASK_IDS}}</dependencies>
<instructions>
## Context
[Brief context about why this task exists]
## Objective
[Clear, measurable objective]
## Requirements
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]
## Constraints
- [Constraint 1]
- [Constraint 2]
## Quality Gates
- [Quality gate 1: e.g., "All tests must pass"]
- [Quality gate 2: e.g., "Linting must pass with zero warnings"]
## Completion Criteria
1. [Criterion 1]
2. [Criterion 2]
3. Create completion marker: touch {{TEMP_DIR}}/{{TASK_ID}}.complete
## Failure Protocol
If any quality gate fails after 3 attempts:
1. Document failure in {{TEMP_DIR}}/{{TASK_ID}}.failure.log
2. Create global failure marker: touch {{TEMP_DIR}}/GLOBAL_FAILURE.marker
3. Halt execution
</instructions>
</task>
Core Directives
1. TDD Mandate
For all code generation tasks, a test plan task must be generated and executed before implementation:
Phase 1: Test Planning
- T1_CREATE_TEST_PLAN → T2_REVIEW_TEST_PLAN
Phase 2: Implementation
- T3_WRITE_TESTS (depends on T2) → T4_IMPLEMENT (depends on T3) → T5_RUN_TESTS
2. 3-Strike Failure Protocol
All subagents must implement retry logic:
Attempt 1: Execute with original approach
↓ (on failure)
Attempt 2: Analyze error, try alternative approach
↓ (on failure)
Attempt 3: Document failure, create GLOBAL_FAILURE.marker, halt
3. Autonomous Multi-Phase Execution
The orchestrator must:
- Complete current phase tasks
- Create phase completion marker
- Re-invoke entire system prompt
- Detect new phase and continue planning
- Repeat until all objectives achieved
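The directive above can be sketched as a bounded loop; plan_and_run_phase is a hypothetical stand-in for one full plan/execute cycle, and a real orchestrator re-invokes its entire system prompt rather than looping in shell.

```shell
# Bounded sketch of autonomous multi-phase execution.
TEMP_DIR="$(mktemp -d)"   # throwaway directory for illustration
MAX_PHASES=2

plan_and_run_phase() {
  # real cycle: detect phase, plan the task graph, delegate, clean up;
  # here we just mark the phase complete
  touch "${TEMP_DIR}/PHASE_$1.complete"
}

phase=1
while [ "$phase" -le "$MAX_PHASES" ]; do
  if [ -f "${TEMP_DIR}/GLOBAL_FAILURE.marker" ]; then
    echo "Halting: global failure" >&2
    break
  fi
  plan_and_run_phase "$phase"
  if [ ! -f "${TEMP_DIR}/PHASE_${phase}.complete" ]; then
    break                      # phase did not finish; do not advance
  fi
  phase=$((phase + 1))         # re-invocation: continue with the next phase
done
echo "Stopped after phase $((phase - 1))"
```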
4. Quality Gate Definition
Every task must include programmatic verification:
Quality Gates for T1_IMPLEMENT_FEATURE:
1. Linting: Run eslint with zero errors
2. Tests: Run test suite with 100% pass rate
3. Coverage: Ensure test coverage >= 80%
4. Security: Run security scanner with zero critical issues
Advanced Patterns
Multi-Specialist Consultation
For complex decisions requiring >2 specialist perspectives, invoke Skill(multi-specialist-discussion) for structured parallel collaboration:
When to Use:
- Architecture decisions needing multiple domain experts
- Research requiring consensus from specialists
- Quality assessments needing independent evaluations
- Complex topics requiring 3+ specialist perspectives
Pattern:
# Instead of manual coordination:
Invoke Skill(multi-specialist-discussion) with topic and required specialists
# DIS (Discussion Coordinator) will:
1. Spawn specialists in parallel using Task tool
2. Collect findings with confidence scores
3. Detect consensus and conflicts
4. Synthesize validated recommendations
For complete multi-specialist orchestration patterns, see Skill(multi-specialist-discussion).
Manual Coordination Alternative (when DIS not available):
For complex decisions, gather input from multiple subagents in parallel:
<task>
<id>T1_GATHER_DESIGN_INPUT</id>
<subagent>autonomous_orchestrator</subagent>
<dependencies>NONE</dependencies>
<instructions>
Launch parallel consultation tasks:
1. Delegate to agent-architect: "Provide system design recommendations"
2. Delegate to software-developer: "Provide implementation complexity assessment"
3. Delegate to research-specialist: "Research best practices and existing solutions"
Wait for all three completion markers:
- {{TEMP_DIR}}/T1A_ARCHITECTURE_INPUT.complete
- {{TEMP_DIR}}/T1B_IMPLEMENTATION_INPUT.complete
- {{TEMP_DIR}}/T1C_RESEARCH_INPUT.complete
Then synthesize all inputs and proceed to T2_DESIGN_DECISION.
</instructions>
</task>
Conditional Task Graphs
Adapt task graphs based on runtime conditions:
<thinking>
Check if tests exist:
- If tests found → Skip T2_WRITE_TESTS, go directly to T3_RUN_TESTS
- If tests missing → Execute T2_WRITE_TESTS first
This implements ADAPT strategy from memory-first planning.
</thinking>
Nested Orchestration
Orchestrators can delegate to other orchestrators for sub-workflows:
<task>
<id>T5_DEPLOY_SUBSYSTEM</id>
<subagent>deployment-orchestrator</subagent>
<dependencies>T4_BUILD</dependencies>
<instructions>
Deploy the built application using your deployment workflow:
1. Run pre-deployment checks
2. Execute blue-green deployment
3. Run post-deployment verification
4. Create completion marker when entire deployment succeeds
</instructions>
</task>
Workflow Example
Complete example of a 2-phase autonomous workflow:
Phase 1: Feature Implementation
<thinking>
Objective: Implement user authentication feature
Phase Detection: No markers found → Starting Phase 1
Memory-First Analysis:
- Query: "authentication modules"
- Found: jwt_auth.py (reusable)
- Found: session_manager.py (needs adaptation)
- Missing: rate limiting implementation (must generate)
Task Graph for Phase 1:
T1_TEST_PLAN → T2_WRITE_TESTS → T3_IMPLEMENT → T4_RUN_TESTS → T5_CLEANUP
</thinking>
Phase 1 Task Graph:
<task>
<id>T1_TEST_PLAN</id>
<subagent>software-developer</subagent>
<dependencies>NONE</dependencies>
<instructions>
Create comprehensive test plan for authentication feature.
Include test cases for: login, logout, session management, rate limiting.
Upon completion: touch {{TEMP_DIR}}/T1_TEST_PLAN.complete
</instructions>
</task>
<task>
<id>T2_WRITE_TESTS</id>
<subagent>software-developer</subagent>
<dependencies>T1_TEST_PLAN</dependencies>
<instructions>
rm -f {{TEMP_DIR}}/T1_TEST_PLAN.complete
Implement tests based on test plan.
Upon completion: touch {{TEMP_DIR}}/T2_WRITE_TESTS.complete
</instructions>
</task>
<task>
<id>T3_IMPLEMENT</id>
<subagent>software-developer</subagent>
<dependencies>T2_WRITE_TESTS</dependencies>
<instructions>
rm -f {{TEMP_DIR}}/T2_WRITE_TESTS.complete
Implement authentication feature.
REUSE: jwt_auth.py from memory
ADAPT: session_manager.py to add rate limiting
GENERATE: password_reset_flow.py
Upon completion: touch {{TEMP_DIR}}/T3_IMPLEMENT.complete
</instructions>
</task>
<task>
<id>T4_RUN_TESTS</id>
<subagent>software-developer</subagent>
<dependencies>T3_IMPLEMENT</dependencies>
<instructions>
rm -f {{TEMP_DIR}}/T3_IMPLEMENT.complete
Run test suite and verify 100% pass rate.
Quality Gates: All tests pass, coverage >= 80%
Upon completion: touch {{TEMP_DIR}}/T4_RUN_TESTS.complete
</instructions>
</task>
<task>
<id>T5_CLEANUP</id>
<subagent>autonomous_orchestrator</subagent>
<dependencies>T4_RUN_TESTS</dependencies>
<instructions>
rm -f {{TEMP_DIR}}/T*.complete
touch {{TEMP_DIR}}/PHASE_1.complete
echo "Phase 1 complete - re-invoking for Phase 2"
</instructions>
</task>
Phase 2: Documentation & Deployment
After Phase 1 completes, the orchestrator re-invokes and detects PHASE_1.complete:
<thinking>
Phase Detection: Found PHASE_1.complete → Starting Phase 2
Objective: Document and deploy authentication feature
Task Graph for Phase 2:
T6_WRITE_DOCS → T7_SECURITY_SCAN → T8_DEPLOY → T9_CLEANUP
</thinking>
Phase 2 Task Graph:
<task>
<id>T6_WRITE_DOCS</id>
<subagent>doc-writer</subagent>
<dependencies>NONE</dependencies>
<instructions>
rm -f {{TEMP_DIR}}/PHASE_1.complete
Document authentication feature API and usage.
Upon completion: touch {{TEMP_DIR}}/T6_WRITE_DOCS.complete
</instructions>
</task>
[... T7, T8, T9 tasks follow same pattern ...]
<task>
<id>T9_CLEANUP</id>
<subagent>autonomous_orchestrator</subagent>
<dependencies>T8_DEPLOY</dependencies>
<instructions>
rm -f {{TEMP_DIR}}/T*.complete
touch {{TEMP_DIR}}/PHASE_2.complete
echo "All phases complete - objective achieved"
</instructions>
</task>
Reference Documentation:
For detailed information on specific components, see the userscope_manual:
- Chapter 1: Executive Summary - Framework overview
- Chapter 2: Agent Specifications - Orchestrator and subagent roles
- Chapter 3: Operational Procedures - Detailed 4-phase workflow
- Chapter 13: Agentic Design Principles - Core design philosophy
Memory-First Planning Integration
The orchestrator implements a Reuse > Adapt > Generate strategy during Phase 1: Task Ingestion & Planning. This section provides integration patterns with the memory system.
Semantic Similarity Thresholds
Full Reuse (≥0.90 similarity):
- Existing artifact fully satisfies requirement
- Action: Load and verify artifact, skip generation
- Example: "Implement user authentication" matching existing JWT auth module
Partial Adaptation (0.731-0.89 similarity):
- Partially relevant artifact found
- Action: Load artifact, document required modifications, apply changes
- Example: "Implement social login" adapting existing auth module for OAuth
Generate New (< 0.731 similarity):
- No relevant artifact exists
- Action: Create new solution following project patterns
- Example: "Implement blockchain verification" creating entirely new subsystem
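The thresholds above can be applied mechanically when scoring memory query results. This sketch assumes scores in the gap between 0.89 and 0.90 fall through to Adapt; awk handles the floating-point comparison, which plain shell arithmetic cannot.

```shell
# Map a similarity score to a Reuse/Adapt/Generate strategy
# using the thresholds above.
strategy_for() {
  awk -v s="$1" 'BEGIN {
    if (s >= 0.90)       print "REUSE"     # full reuse
    else if (s >= 0.731) print "ADAPT"     # partial adaptation
    else                 print "GENERATE"  # no relevant artifact
  }'
}

strategy_for 0.92   # prints REUSE
```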
Memory Query Pattern
{
"query": "user authentication with JWT tokens",
"min_similarity": 0.731,
"artifact_types": ["module", "component", "service"],
"project_scope": "current_project",
"results": [
{
"artifact": "auth_module.js",
"similarity": 0.92,
"strategy": "REUSE",
"path": "/src/auth/authentication.js"
},
{
"artifact": "oauth_adapter.js",
"similarity": 0.81,
"strategy": "ADAPT",
"path": "/src/integrations/oauth.js",
"required_changes": ["Add JWT wrapping", "Update session storage"]
}
]
}
Integration with Phase 1
- Before Task Graph Design: Query memory for each major component
- Document Strategy: Record Reuse/Adapt/Generate decision for each task
- Pass Context: Include artifact references in subagent instructions
- Update Memory: Save completed artifacts for future reuse
For complete semantic search patterns, see Skill(memory-management).
Task Graph Design Quick Reference
The orchestrator generates task graphs with explicit dependencies, resources, and verification gates.
Core Elements
Task Definition:
<task>
<id>T{{N}}_{{DESCRIPTION}}</id>
<type>{{TASK_TYPE}}</type>
<subagent>{{SUBAGENT_NAME}}</subagent>
<dependencies>{{PREREQUISITE_TASKS}}</dependencies>
<priority>{{P0|P1|P2}}</priority>
<verification>{{GATE_COMMAND}}</verification>
</task>
Task Types:
- ingestion: Gather requirements and context
- implementation: Create new artifacts
- verification: Test and validate
- cleanup: Remove temporary state
- deployment: Release to production
Priority Levels:
- P0: Critical, blocks other tasks
- P1: Important, required for completion
- P2: Enhancement, optional
Dependency Patterns
Sequential (Linear Chain):
T0 → T1 → T2 → T3 → Cleanup
Parallel (Fan-Out):
T0 → [T1A || T1B || T1C] → T2
Complex Dependencies:
T0
├─ T1A → T2A ┐
├─ T1B → T2B ├─ T3 → Cleanup
└─ T1C ──────┘
Verification Gate Template
#!/bin/bash
TARGET_FILE="${1:?File required}"
MODULE="${2:?Module required}"
# Linting
{{LINTER_COMMAND}} "$TARGET_FILE" 2>lint.stderr || exit 1
# Testing
{{TEST_COMMAND}} --cov="$MODULE" --cov-fail-under=95 2>test.stderr || exit 1
# Quality gates
{{QG_CALCULATOR}} "$TARGET_FILE" --fail-under=90 2>qg.stderr || exit 1
# Security
{{SECURITY_SCANNER}} -r "$TARGET_FILE" 2>security.stderr || exit 1
exit 0
For comprehensive task graph methodology, see Task Graph Design Reference.
Advanced State Management
Marker Lifecycle
Creation:
# Task completion marker
touch {{TEMP_DIR}}/{{TASK_ID}}.complete
# Phase completion marker
touch {{TEMP_DIR}}/PHASE_{{N}}.complete
# Global failure marker
echo "{{FAILURE_REASON}}" > {{TEMP_DIR}}/GLOBAL_FAILURE.marker
Cleanup:
# Idempotent task marker cleanup (safe even if missing)
rm -f {{TEMP_DIR}}/{{TASK_ID}}.complete
# Phase cleanup (remove all task markers)
rm -f {{TEMP_DIR}}/T*.complete
# Verify cleanup
if [ -z "$(ls -A {{TEMP_DIR}}/T*.complete 2>/dev/null)" ]; then
echo "All task markers cleaned"
fi
Verification:
# Check if task completed
test -f {{TEMP_DIR}}/{{TASK_ID}}.complete && echo "Task completed"
# Check if phase completed
test -f {{TEMP_DIR}}/PHASE_{{N}}.complete && echo "Phase completed"
# Check for global failure
test -f {{TEMP_DIR}}/GLOBAL_FAILURE.marker && cat {{TEMP_DIR}}/GLOBAL_FAILURE.marker
State Transition Rules
- Before Task Starts: Clean prerequisite marker (idempotent)
- During Execution: Task works independently
- On Success: Create completion marker atomically
- On Failure: Create global failure marker, halt new tasks
- Phase Transition: Create phase marker, trigger orchestrator re-invocation
Atomic Operations
# Atomic marker creation (all-or-nothing)
temp_marker="$(mktemp)"
echo "Task metadata" > "$temp_marker"
mv "$temp_marker" "{{TEMP_DIR}}/{{TASK_ID}}.complete" || exit 1
# Atomic marker verification (check exists)
[ -f "{{TEMP_DIR}}/{{TASK_ID}}.complete" ] || exit 1
# Atomic cleanup (idempotent, no error if missing)
rm -f "{{TEMP_DIR}}/{{TASK_ID}}.complete"
State Inspection
# Current phase
PHASE=$(ls {{TEMP_DIR}}/PHASE_*.complete 2>/dev/null | sed 's/.*PHASE_//; s/\.complete$//' | sort -n | tail -1)
echo "Current phase: $PHASE"
# Completed tasks
echo "Completed tasks:"
for f in {{TEMP_DIR}}/T*.complete; do
  [ -e "$f" ] && basename "$f" .complete
done
# Failed tasks
if [ -f "{{TEMP_DIR}}/GLOBAL_FAILURE.marker" ]; then
echo "Global failure:"
cat {{TEMP_DIR}}/GLOBAL_FAILURE.marker
fi
For complete state management patterns and marker conventions, see references/marker-conventions.md and references/state-management.md.
Best Practices
- Always start with thinking tags: Plan task graphs internally before delegating
- Use file markers for all state: Never rely on in-memory state between phases
- Make cleanup idempotent: Use rm -f to safely clean markers
- Define quality gates programmatically: Scripts, not manual checks
- Document memory-first decisions: Log reuse/adapt/generate strategy
- Implement 3-strike protocol: Retry, remedy, halt - never infinite loops
- Create atomic tasks: Each task should have one clear, verifiable objective
- Leverage parallel execution: Identify independent tasks that can run concurrently
- Use consistent marker naming: Follow the {{TEMP_DIR}}/{{TASK_ID}}.complete pattern
- Re-invoke after every phase: Autonomous continuation requires explicit re-invocation
Common Pitfalls
- Forgetting marker cleanup: Always rm -f prerequisite markers at task start
- Missing TDD tasks: Test plan and tests must precede implementation
- Incomplete quality gates: Define programmatic verification for every task
- Skipping memory queries: Always check for existing artifacts first
- Non-atomic tasks: Tasks that try to do too much become hard to verify
- Missing failure markers: Global failure marker prevents zombie tasks
- Inconsistent temp directory: Always use the {{TEMP_DIR}} variable consistently
- Forgetting re-invocation: Phases won't continue without explicit re-invocation
Troubleshooting
Tasks not triggering
- Verify prerequisite markers exist: ls -la {{TEMP_DIR}}/
- Check marker naming matches task dependencies
- Ensure cleanup happened (idempotent rm -f at task start)
Infinite loops
- Check for missing completion markers in task instructions
- Verify 3-strike failure protocol is implemented
- Look for global failure marker: {{TEMP_DIR}}/GLOBAL_FAILURE.marker
Phase not advancing
- Verify phase completion marker created: {{TEMP_DIR}}/PHASE_N.complete
- Check that re-invocation is happening after cleanup task
- Ensure phase detection logic is correct
Quality gates failing
- Verify gates are programmatic (scripts, not manual checks)
- Check gate definitions are specific and measurable
- Review failure logs: {{TEMP_DIR}}/{{TASK_ID}}.failure.log