| name | dev-cycle |
| description | Main orchestrator for the 6-gate development cycle system. Loads tasks/subtasks from PM team output and executes through implementation, devops, SRE, testing, review, and validation gates with state persistence and metrics collection. |
| trigger | - Starting a new development cycle with a task file - Resuming an interrupted development cycle (--resume flag) - Need structured, gate-based task execution with quality checkpoints |
| prerequisite | - Tasks file exists with structured subtasks - Not already in a specific gate skill execution - Human has not explicitly requested manual workflow |
| NOT_skip_when | - "Task is simple" → Simple ≠ risk-free. Execute gates. - "Tests already pass" → Tests ≠ review. Different concerns. - "Time pressure" → Pressure ≠ permission. Document and proceed. - "Already did N gates" → Sunk cost is irrelevant. Complete all gates. |
Development Cycle Orchestrator
Standards Loading (MANDATORY)
Before ANY gate execution, you MUST load Ring standards:
See CLAUDE.md as the canonical source. This table summarizes the loading process.
| Parameter | Value |
|---|---|
| url | `https://raw.githubusercontent.com/LerianStudio/ring/main/CLAUDE.md` |
| prompt | "Extract Agent Modification Verification requirements, Anti-Rationalization Tables requirements, and Critical Rules" |
Execute WebFetch before proceeding. Do NOT continue until standards are loaded.
If WebFetch fails → STOP and report blocker. Cannot proceed without Ring standards.
Overview
The development cycle orchestrator loads tasks/subtasks from PM team output (or manual task files) and executes through 6 quality gates. Tasks are loaded at initialization - no separate import gate.
Announce at start: "I'm using the dev-cycle skill to orchestrate task execution through 6 gates."
⛔ CRITICAL: Specialized Agents Perform All Tasks
See shared-patterns/shared-orchestrator-principle.md for full ORCHESTRATOR principle, role separation, forbidden/required actions, gate-to-agent mapping, and anti-rationalization table.
Summary: You orchestrate. Agents execute. If using Read/Write/Edit/Bash on source code → STOP. Dispatch agent.
⛔ ORCHESTRATOR BOUNDARIES (HARD GATE)
This section defines exactly what the orchestrator CAN and CANNOT do.
What Orchestrator CAN Do (PERMITTED)
| Action | Tool | Purpose |
|---|---|---|
| Read task files | Read | Load task definitions from docs/pre-dev/*/tasks.md or docs/refactor/*/tasks.md |
| Read state files | Read | Load/verify docs/dev-cycle/current-cycle.json or docs/dev-refactor/current-cycle.json |
| Read PROJECT_RULES.md | Read | Load project-specific rules |
| Write state files | Write | Persist cycle state to JSON |
| Track progress | TodoWrite | Maintain task list |
| Dispatch agents | Task | Send work to specialist agents |
| Ask user questions | AskUserQuestion | Get execution mode, approvals |
| WebFetch standards | WebFetch | Load Ring standards |
What Orchestrator CANNOT Do (FORBIDDEN)
| Action | Tool | Why FORBIDDEN |
|---|---|---|
| Read source code | Read on *.go, *.ts, *.tsx | Agent reads code, not orchestrator |
| Write source code | Write/Create on *.go, *.ts | Agent writes code, not orchestrator |
| Edit source code | Edit on *.go, *.ts, *.tsx | Agent edits code, not orchestrator |
| Run tests | Execute with go test, npm test | Agent runs tests in TDD cycle |
| Analyze code | Direct pattern analysis | codebase-explorer analyzes |
| Make architectural decisions | Choosing patterns/libraries | User decides, agent implements |
The 3-FILE RULE
If a task requires editing MORE than 3 files → MUST dispatch specialist agent.
This is NOT negotiable:
- 1-3 files of non-source content (markdown, json, yaml) → Orchestrator MAY edit directly
- 1+ source code files (*.go, *.ts, *.tsx) → MUST dispatch agent
- 4+ files of ANY type → MUST dispatch agent
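To make the rule concrete, here is a minimal sketch of the dispatch decision (Python, purely illustrative; the function and constant names are hypothetical — the orchestrator applies this rule as policy, not by running a script):

```python
# Illustrative sketch of the 3-FILE RULE decision; not part of the skill itself.
SOURCE_EXTENSIONS = (".go", ".ts", ".tsx")

def must_dispatch_agent(files_to_edit: list[str]) -> bool:
    """Return True when the orchestrator must dispatch a specialist agent."""
    has_source = any(f.endswith(ext) for f in files_to_edit for ext in SOURCE_EXTENSIONS)
    # Any source code file, or more than 3 files of any type, requires an agent.
    return has_source or len(files_to_edit) > 3

# must_dispatch_agent(["README.md", "config.yaml"])      -> False (orchestrator may edit)
# must_dispatch_agent(["internal/handler/user.go"])      -> True  (source code)
# must_dispatch_agent(["a.md", "b.md", "c.md", "d.md"])  -> True  (4+ files)
```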
Orchestrator Workflow Order (MANDATORY)
┌─────────────────────────────────────────────────────────────────┐
│ CORRECT WORKFLOW ORDER │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Load task file (Read docs/pre-dev/*/tasks.md or docs/refactor/*/tasks.md) │
│ 2. Ask execution mode (AskUserQuestion) │
│ 3. Determine state path + Check/Load state (see State Path Selection) │
│ 4. WebFetch Ring Standards │
│ 5. ⛔ LOAD SUB-SKILL for current gate (Skill tool) │
│ 6. Execute sub-skill instructions (dispatch agent via Task) │
│ 7. Wait for agent completion │
│ 8. Verify agent output (Standards Coverage Table) │
│ 9. Update state (Write to JSON) │
│ 10. Proceed to next gate │
│ │
│ ════════════════════════════════════════════════════════════ │
│ ❌ WRONG: Load → Mode → Standards → Task(agent) directly │
│ ✅ RIGHT: Load → Mode → Standards → Skill(sub) → Task(agent) │
│ ════════════════════════════════════════════════════════════ │
└─────────────────────────────────────────────────────────────────┘
⛔ SUB-SKILL LOADING IS MANDATORY (HARD GATE)
Before dispatching ANY agent, you MUST load the corresponding sub-skill first.
| Gate | Sub-Skill to Load | Then Dispatch Agent |
|---|---|---|
| Gate 0 | Skill("dev-implementation") | Task(subagent_type="backend-engineer-*", ...) |
| Gate 1 | Skill("dev-devops") | Task(subagent_type="devops-engineer", ...) |
| Gate 2 | Skill("dev-sre") | Task(subagent_type="sre", ...) |
| Gate 3 | Skill("dev-testing") | Task(subagent_type="qa-analyst", ...) |
| Gate 4 | Skill("requesting-code-review") | 3x Task(...) in parallel |
| Gate 5 | Skill("dev-validation") | N/A (verification only) |
The workflow for EACH gate is:
1. Skill("[sub-skill-name]") ← Load sub-skill instructions
2. Follow sub-skill instructions ← Sub-skill tells you HOW to dispatch
3. Task(subagent_type=...) ← Dispatch agent as sub-skill instructs
4. Validate agent output ← Per sub-skill validation rules
5. Update state ← Record results
Anti-Rationalization for Skipping Sub-Skills
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I know what the sub-skill does" | Knowledge ≠ execution. Sub-skill has iteration logic. | Load Skill() first |
| "Task() directly is faster" | Faster ≠ correct. Sub-skill has validation rules. | Load Skill() first |
| "Sub-skill just wraps Task()" | Sub-skills have retry logic, fix dispatch, validation. | Load Skill() first |
| "I'll follow the pattern manually" | Manual = error-prone. Sub-skill is the pattern. | Load Skill() first |
Between "WebFetch standards" and "Task(agent)" there MUST be "Skill(sub-skill)".
Anti-Rationalization for Direct Coding
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "It's just one small file" | File count doesn't determine agent need. Language does. | DISPATCH specialist agent |
| "I already loaded the standards" | Loading standards ≠ permission to implement. Standards are for AGENTS. | DISPATCH specialist agent |
| "Agent dispatch adds overhead" | Overhead ensures compliance. Skip = skip verification. | DISPATCH specialist agent |
| "I can write Go/TypeScript" | Knowing language ≠ having Ring standards loaded. Agent has them. | DISPATCH specialist agent |
| "Just a quick fix" | "Quick" is irrelevant. ALL source changes require specialist. | DISPATCH specialist agent |
| "I'll read the file first to understand" | Reading source → temptation to edit. Agent reads for you. | DISPATCH specialist agent |
| "Let me check if tests pass first" | Agent runs tests in TDD cycle. You don't run tests. | DISPATCH specialist agent |
Red Flags - Orchestrator Violation in Progress
If you catch yourself doing ANY of these, STOP IMMEDIATELY:
🚨 RED FLAG: About to Read *.go or *.ts file
→ STOP. Dispatch agent instead.
🚨 RED FLAG: About to Write/Create source code
→ STOP. Dispatch agent instead.
🚨 RED FLAG: About to Edit source code
→ STOP. Dispatch agent instead.
🚨 RED FLAG: About to run "go test" or "npm test"
→ STOP. Agent runs tests, not you.
🚨 RED FLAG: Thinking "I'll just..."
→ STOP. "Just" is the warning word. Dispatch agent.
🚨 RED FLAG: Thinking "This is simple enough..."
→ STOP. Simplicity is irrelevant. Dispatch agent.
🚨 RED FLAG: Standards loaded, but next action is NOT Task tool
→ STOP. After standards, IMMEDIATELY dispatch agent.
Recovery from Orchestrator Violation
If you violated orchestrator boundaries:
- STOP current execution immediately
- DISCARD any direct changes (git checkout -- .)
- DISPATCH the correct specialist agent
- Agent implements from scratch following TDD
- Document the violation for feedback loop
Sunk cost of direct work is IRRELEVANT. Agent dispatch is MANDATORY.
Blocker Criteria - STOP and Report
| Decision Type | Examples | Action |
|---|---|---|
| Gate Failure | Tests not passing, review failed | STOP. Cannot proceed to next gate. |
| Missing Standards | No PROJECT_RULES.md | STOP. Report blocker and wait. |
| Agent Failure | Specialist agent returned errors | STOP. Diagnose and report. |
| User Decision Required | Architecture choice, framework selection | STOP. Present options with trade-offs. |
You CANNOT proceed when blocked. Report and wait for resolution.
Cannot Be Overridden
| Requirement | Rationale | Consequence If Skipped |
|---|---|---|
| All 6 gates must execute | Each gate catches different issues | Missing critical defects, security vulnerabilities |
| Gates execute in order (0→5) | Dependencies exist between gates | Testing untested code, reviewing unobservable systems |
| Gate 4 requires ALL 3 reviewers | Different review perspectives are complementary | Missing security issues, business logic flaws |
| Coverage threshold ≥ 85% | Industry standard for quality code | Untested edge cases, regression risks |
| PROJECT_RULES.md must exist | Cannot verify standards without target | Arbitrary decisions, inconsistent implementations |
Severity Calibration
| Severity | Criteria | Examples |
|---|---|---|
| CRITICAL | Blocks deployment, security risk, data loss | Gate violation, skipped mandatory step |
| HIGH | Major functionality broken, standards violation | Missing tests, wrong agent dispatched |
| MEDIUM | Code quality, maintainability issues | Incomplete documentation, minor gaps |
| LOW | Best practices, optimization | Style improvements, minor refactoring |
Report ALL severities. Let user prioritize.
Reviewer Verdicts Are Final
MEDIUM issues found in Gate 4 MUST be fixed. No exceptions.
| Request | Why It's WRONG | Required Action |
|---|---|---|
| "Can reviewer clarify if MEDIUM can defer?" | Reviewer already decided. MEDIUM means FIX. | Fix the issue, re-run reviewers |
| "Ask if this specific case is different" | Reviewer verdict accounts for context already. | Fix the issue, re-run reviewers |
| "Request exception for business reasons" | Reviewers know business context. Verdict is final. | Fix the issue, re-run reviewers |
Severity mapping is absolute:
- CRITICAL/HIGH/MEDIUM → Fix NOW, re-run all 3 reviewers
- LOW → Add TODO(review): comment
- Cosmetic → Add FIXME(nitpick): comment
No negotiation. No exceptions. No "special cases".
Pressure Resistance
See shared-patterns/shared-pressure-resistance.md for universal pressure scenarios.
Gate-specific note: Execution mode selection affects CHECKPOINTS (user approval pauses), not GATES (quality checks). ALL gates execute regardless of mode.
Common Rationalizations - REJECTED
See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations.
Gate-specific rationalizations:
| Excuse | Reality |
|---|---|
| "Automatic mode means faster" | Automatic mode skips CHECKPOINTS, not GATES. Same quality, less interruption. |
| "Automatic mode will skip review" | Automatic mode affects user approval pauses, NOT quality gates. ALL gates execute regardless. |
| "Defense in depth exists (frontend validates)" | Frontend can be bypassed. Backend is the last line. Fix at source. |
| "Backlog the Medium issue, it's documented" | Documented risk ≠ mitigated risk. Medium in Gate 4 = fix NOW, not later. |
| "Risk-based prioritization allows deferral" | Gates ARE the risk-based system. Reviewers define severity, not you. |
Red Flags - STOP
See shared-patterns/shared-red-flags.md for universal red flags.
If you catch yourself thinking ANY of those patterns, STOP immediately and return to gate execution.
Incremental Compromise Prevention
The "just this once" pattern leads to complete gate erosion:
Day 1: "Skip review just this once" → Approved (precedent set)
Day 2: "Skip testing, we did it last time" → Approved (precedent extended)
Day 3: "Skip implementation checks, pattern established" → Approved (gates meaningless)
Day 4: Production incident from Day 1 code
Prevention rules:
- No incremental exceptions - Each exception becomes the new baseline
- Document every pressure - Log who requested, why, outcome
- Escalate patterns - If same pressure repeats, escalate to team lead
- Gates are binary - Complete or incomplete. No "mostly done".
Gate Completion Definition (HARD GATE)
A gate is COMPLETE only when ALL components finish successfully:
| Gate | Components Required | Partial = FAIL |
|---|---|---|
| 0.1 | TDD-RED: Failing test written + failure output captured | Test exists but no failure output = FAIL |
| 0.2 | TDD-GREEN: Implementation passes test | Code exists but test fails = FAIL |
| 0 | Both 0.1 and 0.2 complete | 0.1 done without 0.2 = FAIL |
| 1 | Dockerfile + docker-compose + .env.example | Missing any = FAIL |
| 2 | Structured JSON logs with trace correlation | Partial structured logs = FAIL |
| 3 | Coverage ≥ 85% + all AC tested | 84% = FAIL |
| 4 | ALL 3 reviewers PASS | 2/3 reviewers = FAIL |
| 5 | Explicit "APPROVED" from user | "Looks good" = NOT approved |
CRITICAL for Gate 4: Running 2 of 3 reviewers is NOT a partial pass - it's a FAIL. Re-run ALL 3 reviewers.
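As an illustrative check (Python sketch; the reviewer keys follow the state schema later in this document, the function name is hypothetical), Gate 4 completion is an all-or-nothing condition:

```python
# Illustrative only: Gate 4 is complete only when ALL three reviewers returned PASS.
REVIEWERS = ("code_reviewer", "business_logic_reviewer", "security_reviewer")

def gate4_complete(review: dict) -> bool:
    """2 of 3 reviewers passing counts as FAIL, exactly as the gate rule states."""
    return all(review.get(r, {}).get("verdict") == "PASS" for r in REVIEWERS)
```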
Anti-Rationalization for Partial Gates:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "2 of 3 reviewers passed" | Gate 4 requires ALL 3. 2/3 = 0/3. | Re-run ALL 3 reviewers |
| "Gate mostly complete" | Mostly ≠ complete. Binary: done or not done. | Complete ALL components |
| "Can finish remaining in next cycle" | Gates don't carry over. Complete NOW. | Finish current gate |
| "Core components done, optional can wait" | No component is optional within a gate. | Complete ALL components |
Gate Order Enforcement (HARD GATE)
Gates MUST execute in order: 0 → 1 → 2 → 3 → 4 → 5. No exceptions.
| Violation | Typical Rationalization | Consequence |
|---|---|---|
| Skip Gate 1 (DevOps) | "No infra changes" | Code without container = works on my machine only |
| Skip Gate 2 (SRE) | "Observability later" | Blind production = debugging nightmare |
| Reorder Gates | "Review before test" | Reviewing untested code wastes reviewer time |
| Parallel Gates | "Run 2 and 3 together" | Dependencies exist. Order is intentional. |
Gates are NOT parallelizable across different gates. Sequential execution is MANDATORY.
The 6 Gates
| Gate | Skill | Purpose | Agent |
|---|---|---|---|
| 0 | dev-implementation | Write code following TDD | Based on task language/domain |
| 1 | dev-devops | Infrastructure and deployment | devops-engineer |
| 2 | dev-sre | Observability (health, logging, tracing) | sre |
| 3 | dev-testing | Unit tests for acceptance criteria | qa-analyst |
| 4 | requesting-code-review | Parallel code review | code-reviewer, business-logic-reviewer, security-reviewer (3x parallel) |
| 5 | dev-validation | Final acceptance validation | N/A (verification) |
Integrated PM → Dev Workflow
PM Team Output → Dev Team Execution (/dev-cycle)
| Input Type | Path | Structure |
|---|---|---|
| Tasks only | docs/pre-dev/{feature}/tasks.md | T-001, T-002, T-003 with requirements + acceptance criteria |
| Tasks + Subtasks | docs/pre-dev/{feature}/ | tasks.md + subtasks/{task-id}/ST-XXX-01.md, ST-XXX-02.md... |
Execution Order
Core Principle: Each execution unit (task OR subtask) passes through all 6 gates before the next unit.
Flow: Unit → Gate 0-5 → 🔒 Unit Checkpoint (Step 7.1) → 🔒 Task Checkpoint (Step 7.2) → Next Unit
| Scenario | Execution Unit | Gates Per Unit |
|---|---|---|
| Task without subtasks | Task itself | 6 gates |
| Task with subtasks | Each subtask | 6 gates per subtask |
Commit Timing
User selects when commits happen (Step 7 of initialization).
| Option | When Commit Happens | Use Case |
|---|---|---|
| (a) Per subtask | After each subtask passes Gate 5 | Fine-grained history, easy rollback per subtask |
| (b) Per task | After all subtasks of a task complete | Logical grouping, one commit per feature chunk |
| (c) At the end | After entire cycle completes | Single commit with all changes, clean history |
Commit Message Format
| Timing | Message Format | Example |
|---|---|---|
| Per subtask | feat({subtask_id}): {subtask_title} | feat(ST-001-02): implement user authentication handler |
| Per task | feat({task_id}): {task_title} | feat(T-001): implement user authentication |
| At the end | feat({cycle_id}): complete dev cycle for {feature} | feat(cycle-abc123): complete dev cycle for auth-system |
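A minimal sketch of how these formats compose (Python; the helper name and parameters are hypothetical — only the message templates come from the table above):

```python
# Illustrative only: builds commit messages matching the formats above.
def commit_message(timing: str, ref_id: str, title_or_feature: str) -> str:
    if timing in ("per_subtask", "per_task"):
        # ref_id is a subtask ID (ST-001-02) or task ID (T-001)
        return f"feat({ref_id}): {title_or_feature}"
    # timing == "at_end": ref_id is the cycle ID, title_or_feature is the feature name
    return f"feat({ref_id}): complete dev cycle for {title_or_feature}"

# commit_message("per_task", "T-001", "implement user authentication")
#   -> "feat(T-001): implement user authentication"
# commit_message("at_end", "cycle-abc123", "auth-system")
#   -> "feat(cycle-abc123): complete dev cycle for auth-system"
```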
Commit Timing vs Execution Mode
| Execution Mode | Commit Timing | Behavior |
|---|---|---|
| Manual per subtask | Per subtask | Commit + checkpoint after each subtask |
| Manual per subtask | Per task | Checkpoint after subtask, commit after task |
| Manual per subtask | At end | Checkpoint after subtask, commit at cycle end |
| Manual per task | Per subtask | Commit after subtask, checkpoint after task |
| Manual per task | Per task | Commit + checkpoint after task |
| Manual per task | At end | Checkpoint after task, commit at cycle end |
| Automatic | Per subtask | Commit after each subtask, no checkpoints |
| Automatic | Per task | Commit after task, no checkpoints |
| Automatic | At end | Single commit at cycle end, no checkpoints |
Note: Checkpoints (user approval pauses) are controlled by execution_mode. Commits are controlled by commit_timing. They are independent settings.
State Management
State Path Selection (MANDATORY)
The state file path depends on the source of tasks:
| Task Source | State Path | Use Case |
|---|---|---|
| docs/refactor/*/tasks.md | docs/dev-refactor/current-cycle.json | Refactoring existing code |
| docs/pre-dev/*/tasks.md | docs/dev-cycle/current-cycle.json | New feature development |
| Any other path | docs/dev-cycle/current-cycle.json | Default for manual tasks |
Detection Logic:
IF source_file contains "docs/refactor/" THEN
state_path = "docs/dev-refactor/current-cycle.json"
ELSE
state_path = "docs/dev-cycle/current-cycle.json"
Store state_path in the state object itself so resume knows where to look.
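The same detection logic as a small Python sketch (illustrative; the function name is hypothetical):

```python
# Illustrative implementation of the detection logic above.
def resolve_state_path(source_file: str) -> tuple[str, str]:
    """Return (state_path, cycle_type) for a given tasks file path."""
    if "docs/refactor/" in source_file:
        return "docs/dev-refactor/current-cycle.json", "refactor"
    return "docs/dev-cycle/current-cycle.json", "feature"

# resolve_state_path("docs/refactor/auth/tasks.md")
#   -> ("docs/dev-refactor/current-cycle.json", "refactor")
# resolve_state_path("docs/pre-dev/auth-system/tasks.md")
#   -> ("docs/dev-cycle/current-cycle.json", "feature")
```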
State File Structure
State is persisted to {state_path} (either docs/dev-cycle/current-cycle.json or docs/dev-refactor/current-cycle.json):
{
"version": "1.0.0",
"cycle_id": "uuid",
"started_at": "ISO timestamp",
"updated_at": "ISO timestamp",
"source_file": "path/to/tasks.md",
"state_path": "docs/dev-cycle/current-cycle.json | docs/dev-refactor/current-cycle.json",
"cycle_type": "feature | refactor",
"execution_mode": "manual_per_subtask|manual_per_task|automatic",
"commit_timing": "per_subtask|per_task|at_end",
"status": "in_progress|completed|failed|paused|paused_for_approval|paused_for_testing|paused_for_task_approval|paused_for_integration_testing",
"feedback_loop_completed": false,
"current_task_index": 0,
"current_gate": 0,
"current_subtask_index": 0,
"tasks": [
{
"id": "T-001",
"title": "Task title",
"status": "pending|in_progress|completed|failed|blocked",
"feedback_loop_completed": false,
"subtasks": [
{
"id": "ST-001-01",
"file": "subtasks/T-001/ST-001-01.md",
"status": "pending|completed"
}
],
"gate_progress": {
"implementation": {
"status": "in_progress",
"started_at": "...",
"tdd_red": {
"status": "pending|in_progress|completed",
"test_file": "path/to/test_file.go",
"failure_output": "FAIL: TestFoo - expected X got nil",
"completed_at": "ISO timestamp"
},
"tdd_green": {
"status": "pending|in_progress|completed",
"implementation_file": "path/to/impl.go",
"test_pass_output": "PASS: TestFoo (0.003s)",
"completed_at": "ISO timestamp"
}
},
"devops": {"status": "pending"},
"sre": {"status": "pending"},
"testing": {"status": "pending"},
"review": {"status": "pending"},
"validation": {"status": "pending"}
},
"artifacts": {},
"agent_outputs": {
"implementation": {
"agent": "backend-engineer-golang",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0,
"iterations": 1,
"standards_compliance": {
"total_sections": 15,
"compliant": 14,
"not_applicable": 1,
"non_compliant": 0,
"gaps": []
}
},
"devops": {
"agent": "devops-engineer",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0,
"iterations": 1,
"artifacts_created": ["Dockerfile", "docker-compose.yml", ".env.example"],
"verification_errors": [],
"standards_compliance": {
"total_sections": 8,
"compliant": 8,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"sre": {
"agent": "sre",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0,
"iterations": 1,
"instrumentation_coverage": "92%",
"validation_errors": [],
"standards_compliance": {
"total_sections": 10,
"compliant": 10,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"testing": {
"agent": "qa-analyst",
"output": "## Summary\n...",
"verdict": "PASS",
"coverage_actual": 87.5,
"coverage_threshold": 85,
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"failures": [],
"uncovered_criteria": [],
"standards_compliance": {
"total_sections": 6,
"compliant": 6,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"review": {
"iterations": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0,
"code_reviewer": {
"agent": "code-reviewer",
"output": "...",
"verdict": "PASS",
"timestamp": "...",
"issues": [],
"standards_compliance": {
"total_sections": 12,
"compliant": 12,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"business_logic_reviewer": {
"agent": "business-logic-reviewer",
"output": "...",
"verdict": "PASS",
"timestamp": "...",
"issues": [],
"standards_compliance": {
"total_sections": 8,
"compliant": 8,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
},
"security_reviewer": {
"agent": "security-reviewer",
"output": "...",
"verdict": "PASS",
"timestamp": "...",
"issues": [],
"standards_compliance": {
"total_sections": 10,
"compliant": 10,
"not_applicable": 0,
"non_compliant": 0,
"gaps": []
}
}
},
"validation": {
"result": "approved|rejected",
"timestamp": "ISO timestamp"
}
}
}
],
"metrics": {
"total_duration_ms": 0,
"gate_durations": {},
"review_iterations": 0,
"testing_iterations": 0
}
}
Structured Error/Issue Schemas
These schemas enable dev-feedback-loop to analyze issues without parsing raw output.
Standards Compliance Gap Schema
{
"section": "Error Handling (MANDATORY)",
"status": "❌",
"reason": "Missing error wrapping with context",
"file": "internal/handler/user.go",
"line": 45,
"evidence": "return err // should wrap with additional context"
}
Test Failure Schema
{
"test_name": "TestUserCreate_InvalidEmail",
"test_file": "internal/handler/user_test.go",
"error_type": "assertion",
"expected": "ErrInvalidEmail",
"actual": "nil",
"message": "Expected validation error for invalid email format",
"stack_trace": "user_test.go:42 → user.go:28"
}
Review Issue Schema
{
"severity": "MEDIUM",
"category": "error-handling",
"description": "Error not wrapped with context before returning",
"file": "internal/handler/user.go",
"line": 45,
"suggestion": "Use fmt.Errorf(\"failed to create user: %w\", err)",
"fixed": false,
"fixed_in_iteration": null
}
DevOps Verification Error Schema
{
"check": "docker_build",
"status": "FAIL",
"error": "COPY failed: file not found in build context: go.sum",
"suggestion": "Ensure go.sum exists and is not in .dockerignore"
}
SRE Validation Error Schema
{
"check": "structured_logging",
"status": "FAIL",
"file": "internal/handler/user.go",
"line": 32,
"error": "Using fmt.Printf instead of structured logger",
"suggestion": "Use logger.Info().Str(\"user_id\", id).Msg(\"user created\")"
}
Populating Structured Data
Each gate MUST populate its structured fields when saving to state:
| Gate | Fields to Populate |
|---|---|
| Gate 0 (Implementation) | standards_compliance (total, compliant, gaps[]) |
| Gate 1 (DevOps) | standards_compliance + verification_errors[] |
| Gate 2 (SRE) | standards_compliance + validation_errors[] |
| Gate 3 (Testing) | standards_compliance + failures[] + uncovered_criteria[] |
| Gate 4 (Review) | standards_compliance per reviewer + issues[] per reviewer |
All gates track standards_compliance:
- total_sections: Count from agent's standards file (via standards-coverage-table.md)
- compliant: Sections marked ✅ in Standards Coverage Table
- not_applicable: Sections marked N/A
- non_compliant: Sections marked ❌ (MUST be 0 to pass gate)
- gaps[]: Detailed info for each ❌ section (even if later fixed)
Empty arrays [] indicate no issues found - this is valid data for feedback-loop.
⛔ State Persistence Rule (MANDATORY)
"Update state" means BOTH update the object AND write to file. Not just in-memory.
After EVERY Gate Transition
You MUST execute these steps after completing ANY gate (0, 1, 2, 3, 4, or 5):
# Step 1: Update state object with gate results
state.tasks[current_task_index].gate_progress.[gate_name].status = "completed"
state.tasks[current_task_index].gate_progress.[gate_name].completed_at = "[ISO timestamp]"
state.current_gate = [next_gate_number]
state.updated_at = "[ISO timestamp]"
# Step 2: Write to file (MANDATORY - use Write tool)
Write tool:
file_path: [state.state_path] # Use state_path from state object
content: [full JSON state]
# Step 3: Verify persistence (MANDATORY - use Read tool)
Read tool:
file_path: [state.state_path] # Use state_path from state object
# Confirm current_gate and gate_progress match expected values
State Persistence Checkpoints
| After | MUST Update | MUST Write File |
|---|---|---|
| Gate 0.1 (TDD-RED) | tdd_red.status, tdd_red.failure_output | ✅ YES |
| Gate 0.2 (TDD-GREEN) | tdd_green.status, implementation.status | ✅ YES |
| Gate 1 (DevOps) | devops.status, agent_outputs.devops | ✅ YES |
| Gate 2 (SRE) | sre.status, agent_outputs.sre | ✅ YES |
| Gate 3 (Testing) | testing.status, agent_outputs.testing | ✅ YES |
| Gate 4 (Review) | review.status, agent_outputs.review | ✅ YES |
| Gate 5 (Validation) | validation.status, task status | ✅ YES |
| Step 7.1 (Unit Approval) | status = "paused_for_approval" | ✅ YES |
| Step 7.2 (Task Approval) | status = "paused_for_task_approval" | ✅ YES |
Anti-Rationalization for State Persistence
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I'll save state at the end" | Crash/timeout loses ALL progress | Save after EACH gate |
| "State is in memory, that's updated" | Memory is volatile. File is persistent. | Write to JSON file |
| "Only save on checkpoints" | Gates without saves = unrecoverable on resume | Save after EVERY gate |
| "Write tool is slow" | Write takes <100ms. Lost progress takes hours. | Write after EVERY gate |
| "I updated the state variable" | Variable ≠ file. Without Write tool, nothing persists. | Use Write tool explicitly |
Verification Command
After each gate, the state file MUST reflect:
- current_gate = next gate number
- updated_at = recent timestamp
- Previous gate status = "completed"
If verification fails → State was not persisted. Re-execute Write tool.
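For clarity, the update → write → verify sequence expressed as a Python sketch (illustrative only; in practice the orchestrator performs these steps with the Write and Read tools, and the field names follow the state schema above):

```python
# Illustrative sketch of "update state object -> write file -> verify" after a gate.
import json
from datetime import datetime, timezone

def persist_gate_completion(state: dict, gate_name: str, next_gate: int) -> None:
    now = datetime.now(timezone.utc).isoformat()
    task = state["tasks"][state["current_task_index"]]
    task["gate_progress"][gate_name]["status"] = "completed"
    task["gate_progress"][gate_name]["completed_at"] = now
    state["current_gate"] = next_gate
    state["updated_at"] = now

    # Step 2: write the FULL state to the file named in state_path (Write tool equivalent).
    with open(state["state_path"], "w") as f:
        json.dump(state, f, indent=2)

    # Step 3: read it back and confirm the transition was persisted (Read tool equivalent).
    with open(state["state_path"]) as f:
        persisted = json.load(f)
    assert persisted["current_gate"] == next_gate, "State was not persisted - re-run Write"
```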
Step 0: Verify PROJECT_RULES.md Exists (HARD GATE)
NON-NEGOTIABLE. Cycle CANNOT proceed without project standards.
Step 0 Flow
┌─────────────────────────────────────────────────────────────────────────────┐
│ Check: Does docs/PROJECT_RULES.md exist? │
│ │
│ ├── YES → Proceed to Step 1 (Initialize or Resume) │
│ │ │
│ └── NO → ASK: "Is this a LEGACY project (created without PM workflow)?" │
│ │ │
│ ├── YES (legacy project) → LEGACY PROJECT ANALYSIS: │
│ │ Step 1: Dispatch codebase-explorer (technical info only) │
│ │ Step 2: Ask 3 questions (what agent can't determine): │
│ │ 1. What do you need help with? │
│ │ 2. Any external APIs not visible in code? │
│ │ 3. Any specific technology not in Ring Standards? │
│ │ Step 3: Generate PROJECT_RULES.md (deduplicated from Ring) │
│ │ Note: Business rules belong in PRD, NOT in PROJECT_RULES │
│ │ → Proceed to Step 1 │
│ │ │
│ └── NO (new project) → ASK: "Do you have PRD, TRD, or Feature Map?" │
│ │ │
│ ├── YES (has PM docs) → "Please provide the file path(s)" │
│ │ → Read PRD/TRD/Feature Map → Extract info │
│ │ → Generate PROJECT_RULES.md │
│ │ → Ask supplementary questions if info is incomplete │
│ │ → Save and proceed to Step 1 │
│ │ │
│ └── NO (no PM docs) → ⛔ HARD BLOCK: │
│ "PM documents are REQUIRED for new projects. │
│ Run /pre-dev-full or /pre-dev-feature first." │
│ → STOP (cycle cannot proceed) │
└─────────────────────────────────────────────────────────────────────────────┘
Step 0.1: Check for PROJECT_RULES.md
# Check if file exists
Read tool:
file_path: "docs/PROJECT_RULES.md"
# If file exists and has content → Proceed to Step 1
# If file does not exist OR is empty → Continue to Step 0.2
Step 0.2: Check if Legacy Project
Ask the User
Use AskUserQuestion:
┌─────────────────────────────────────────────────────────────────┐
│ 📋 PROJECT_RULES.md NOT FOUND │
├─────────────────────────────────────────────────────────────────┤
│ │
│ I need to create docs/PROJECT_RULES.md to understand your │
│ project's specific conventions and domain. │
│ │
│ First, I need to know: Is this a LEGACY project? │
│ │
│ A legacy project is one that was created WITHOUT using the │
│ PM team workflow (no PRD, TRD, or Feature Map documents). │
│ │
└─────────────────────────────────────────────────────────────────┘
Question
"Is this a legacy project (created without PM team workflow)?"
Options
(a) Yes, this is a legacy project (b) No, this is a new project following Ring workflow
If YES (legacy)
Go to Step 0.2.1 (Legacy Project Analysis)
If NO (new project)
Go to Step 0.3 (Check for PM Documents)
Step 0.2.1: Legacy Project Analysis (Technical Only)
Overview
For legacy projects, analyze codebase for TECHNICAL information only:
┌─────────────────────────────────────────────────────────────────┐
│ 📋 LEGACY PROJECT ANALYSIS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Since this is a legacy project, I'll analyze the codebase │
│ for TECHNICAL information (not business rules). │
│ │
│ Step 1: Automated analysis (codebase-explorer) │
│ Step 2: Ask for project-specific tech not in Ring Standards │
│ Step 3: Generate PROJECT_RULES.md (deduplicated) │
│ │
│ Note: Business rules belong in PRD/product docs, NOT here. │
│ │
└─────────────────────────────────────────────────────────────────┘
Step 0.2.1a: Automated Codebase Analysis (MANDATORY)
⛔ You MUST use the Task tool to dispatch codebase-explorer. This is NOT implicit.
Dispatch Agent
Dispatch codebase-explorer to analyze the legacy project for TECHNICAL information:
Action: Use Task tool with EXACTLY these parameters:
┌─────────────────────────────────────────────────────────────────────────────────┐
│ ⛔ If Task tool NOT used → Analysis does NOT happen → PROJECT_RULES.md INVALID │
└─────────────────────────────────────────────────────────────────────────────────┘
# Agent 1: Codebase Explorer - Technical Analysis
Task tool:
subagent_type: "codebase-explorer"
model: "opus"
description: "Analyze legacy project for PROJECT_RULES.md"
prompt: |
Analyze this LEGACY codebase to extract technical information for PROJECT_RULES.md.
This is an existing project created without PM documentation.
Your job is to understand what exists in the code.
**Extract:**
1. **Project Structure:** Directory layout, module organization
2. **Technical Stack:** Languages, frameworks, databases, external services
3. **Architecture Patterns:** Clean Architecture, MVC, microservices, etc.
4. **Existing Features:** Main modules, endpoints, capabilities
5. **Internal Libraries:** Shared packages, utilities
6. **Configuration:** Environment variables, config patterns
7. **Database:** Schema patterns, migrations, ORM usage
8. **External Integrations:** APIs consumed, message queues
**Output format:**
## Technical Analysis (Legacy Project)
### Project Overview
[What this project appears to do based on code analysis]
### Technical Stack
- Language: [detected]
- Framework: [detected]
- Database: [detected]
- External Services: [detected]
### Architecture Patterns
[Detected patterns]
### Existing Features
[List of features/modules found]
### Project Structure
[Directory layout explanation]
### Configuration
[Env vars, config files found]
### External Integrations
[APIs, services detected]
Note: Business logic analysis is NOT needed for PROJECT_RULES.md. Business rules belong in PRD/product docs, not technical project rules.
Verification (MANDATORY)
After agent completes, confirm:
- codebase-explorer returned the "## Technical Analysis (Legacy Project)" section
- Output contains non-empty content for: Tech Stack, External Integrations, Configuration
If agent failed or returned empty output → Re-dispatch. Cannot proceed without technical analysis.
Step 0.2.1b: Supplementary Questions (Only What Agents Can't Determine)
Post-Analysis Questions
After agents complete, ask ONLY what they couldn't determine from code:
┌─────────────────────────────────────────────────────────────────┐
│ ✓ Codebase Analysis Complete │
├─────────────────────────────────────────────────────────────────┤
│ │
│ I've analyzed your codebase. Now I need a few details that │
│ only you can provide (not visible in the code). │
│ │
└─────────────────────────────────────────────────────────────────┘
Questions to Ask
Use AskUserQuestion for each:
| # | Question | Why Agents Can't Determine This |
|---|---|---|
| 1 | What do you need help with? (Current task/feature/fix) | Future intent, not in code |
| 2 | Any external APIs or services not visible in code? (Third-party integrations planned) | Planned integrations, not yet in code |
| 3 | Any specific technology not in Ring Standards? (Message broker, cache, etc.) | Project-specific tech not in Ring |
Note: Business rules belong in PRD/product docs, NOT in PROJECT_RULES.md.
Step 0.2.1c: Generate PROJECT_RULES.md
Combine Agent Outputs and User Answers
Create tool:
file_path: "docs/PROJECT_RULES.md"
content: |
# Project Rules
> Ring Standards apply automatically. This file documents ONLY what Ring does NOT cover.
> For error handling, logging, testing, architecture, lib-commons → See Ring Standards (auto-loaded by agents)
> Generated from legacy project analysis.
## What Ring Standards Already Cover (DO NOT ADD HERE)
The following are defined in Ring Standards and MUST NOT be duplicated:
- Error handling patterns (no panic, wrap errors)
- Logging standards (structured JSON, zerolog/zap)
- Testing patterns (table-driven tests, mocks)
- Architecture patterns (Hexagonal, Clean Architecture)
- Observability (OpenTelemetry, trace correlation)
- lib-commons usage and patterns
- API directory structure
---
## Tech Stack (Not in Ring Standards)
[From codebase-explorer: Technologies NOT covered by Ring Standards]
[e.g., specific message broker, specific cache, DB if not PostgreSQL]
| Technology | Purpose | Notes |
|------------|---------|-------|
| [detected] | [purpose] | [notes] |
## Non-Standard Directory Structure
[From codebase-explorer: Directories that deviate from Ring's standard API structure]
[e.g., workers/, consumers/, polling/]
| Directory | Purpose | Pattern |
|-----------|---------|---------|
| [detected] | [purpose] | [pattern] |
## External Integrations
[From codebase-explorer: Third-party services specific to this project]
| Service | Purpose | Docs |
|---------|---------|------|
| [detected] | [purpose] | [link] |
## Environment Configuration
[From codebase-explorer: Project-specific env vars NOT covered by Ring]
| Variable | Purpose | Example |
|----------|---------|---------|
| [detected] | [purpose] | [example] |
## Domain Terminology
[From codebase analysis: Technical names used in this codebase]
| Term | Definition | Used In |
|------|------------|---------|
| [detected] | [definition] | [location] |
---
*Generated: [ISO timestamp]*
*Source: Legacy project analysis (codebase-explorer)*
*Ring Standards Version: [version from WebFetch]*
Present to User
┌─────────────────────────────────────────────────────────────────┐
│ ✓ PROJECT_RULES.md Generated for Legacy Project │
├─────────────────────────────────────────────────────────────────┤
│ │
│ I analyzed your codebase using: │
│ • codebase-explorer (technical patterns, stack, structure) │
│ │
│ Combined with your input on: │
│ • Current development goal │
│ • External integrations │
│ • Project-specific technology │
│ │
│ Generated: docs/PROJECT_RULES.md │
│ │
│ Note: Ring Standards (error handling, logging, testing, etc.) │
│ are NOT duplicated - agents load them automatically via WebFetch│
│ │
│ Please review the file and make any corrections needed. │
│ │
└─────────────────────────────────────────────────────────────────┘
Ask for Approval
Use AskUserQuestion:
- Question: "PROJECT_RULES.md has been generated. Would you like to review it before proceeding?"
- Options: (a) Proceed (b) Open for editing first
After Approval
Proceed to Step 1
Step 0.3: Check for PM Documents (PRD/TRD/Feature Map)
Check for PM Documents
For NEW projects (not legacy), ask about PM documents:
┌─────────────────────────────────────────────────────────────────┐
│ 📋 NEW PROJECT - PM DOCUMENTS CHECK │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Since this is a new project following Ring workflow, you │
│ should have PM documents from the pre-dev workflow. │
│ │
│ Do you have any of these PM documents? │
│ • PRD (Product Requirements Document) │
│ • TRD (Technical Requirements Document) │
│ • Feature Map (from pre-dev-feature-map skill) │
│ │
└─────────────────────────────────────────────────────────────────┘
Question
"Do you have PRD, TRD, or Feature Map documents for this project?"
Options
(a) Yes, I have PM documents (b) No, I don't have these documents
If YES - Ask for File Paths
"Please provide the file path(s) to your PM documents:
- PRD path (or 'skip' if none):
- TRD path (or 'skip' if none):
- Feature Map path (or 'skip' if none): "
Example Paths
Typical PM team output structure:
docs/pre-dev/{feature-name}/
├── prd.md → PRD path: docs/pre-dev/auth-system/prd.md
├── trd.md → TRD path: docs/pre-dev/auth-system/trd.md
├── feature-map.md → Feature Map path: docs/pre-dev/auth-system/feature-map.md
├── api-design.md
├── data-model.md
└── tasks.md
Common Patterns
- /pre-dev-full output: docs/pre-dev/{feature}/prd.md, trd.md, feature-map.md
- /pre-dev-feature output: docs/pre-dev/{feature}/prd.md, feature-map.md
- Custom locations: User may have docs in different paths (e.g., requirements/, specs/)
Then
Go to Step 0.3.1 (Generate from PM Documents)
If NO
HARD BLOCK (Step 0.3.2)
Step 0.3.1: Generate from PM Documents (PRD/TRD/Feature Map)
Read the Provided Documents
# Read PRD if provided
Read tool:
file_path: "[user-provided PRD path]"
# Read TRD if provided
Read tool:
file_path: "[user-provided TRD path]"
# Read Feature Map if provided
Read tool:
file_path: "[user-provided Feature Map path]"
Extract PROJECT_RULES.md Content from PM Documents
⛔ DEDUPLICATION RULE: Extract ONLY what Ring Standards do NOT cover.
| From PRD | Extract For PROJECT_RULES.md | Note |
|---|---|---|
| Domain terms, entities | Domain Terminology | Technical names only |
| External service mentions | External Integrations | Third-party APIs |
| ❌ Stays in PRD, not PROJECT_RULES | ||
| ❌ Ring Standards covers this |
| From TRD | Extract For PROJECT_RULES.md | Note |
|---|---|---|
| Tech stack not in Ring | Tech Stack (Not in Ring) | Only non-standard tech |
| External APIs | External Integrations | Third-party services |
| Non-standard directories | Non-Standard Directory Structure | Workers, consumers, etc. |
| ❌ Ring Standards covers this | ||
| ❌ Ring Standards covers this |
| From Feature Map | Extract For PROJECT_RULES.md | Note |
|---|---|---|
| Technology choices not in Ring | Tech Stack (Not in Ring) | Only if not in Ring |
| External dependencies | External Integrations | Third-party services |
| ❌ Ring Standards covers this |
Generate PROJECT_RULES.md
Create tool:
file_path: "docs/PROJECT_RULES.md"
content: |
# Project Rules
> ⛔ IMPORTANT: Ring Standards are NOT automatic. Agents MUST WebFetch them before implementation.
> This file documents ONLY project-specific information not covered by Ring Standards.
> Generated from PM documents (PRD/TRD/Feature Map).
>
> Ring Standards URLs:
> - Go: https://raw.githubusercontent.com/LerianStudio/ring/main/dev-team/docs/standards/golang.md
> - TypeScript: https://raw.githubusercontent.com/LerianStudio/ring/main/dev-team/docs/standards/typescript.md
## What Ring Standards Cover (DO NOT DUPLICATE HERE)
The following are defined in Ring Standards and MUST NOT be duplicated in this file:
- Error handling patterns (no panic, wrap errors)
- Logging standards (structured JSON via lib-commons)
- Testing patterns (table-driven tests, mocks)
- Architecture patterns (Hexagonal, Clean Architecture)
- Observability (OpenTelemetry via lib-commons)
- lib-commons / lib-common-js usage and patterns
- API directory structure (Lerian pattern)
- Database connections (PostgreSQL, MongoDB, Redis via lib-commons)
- Bootstrap pattern (config.go, service.go, server.go)
**Agents MUST WebFetch Ring Standards and output Standards Coverage Table.**
---
## Tech Stack (Not in Ring Standards)
[From TRD/Feature Map: ONLY technologies NOT covered by Ring Standards]
| Technology | Purpose | Notes |
|------------|---------|-------|
| [detected] | [purpose] | [notes] |
## Non-Standard Directory Structure
[From TRD: Directories that deviate from Ring's standard API structure]
| Directory | Purpose | Pattern |
|-----------|---------|---------|
| [detected] | [purpose] | [pattern] |
## External Integrations
[From TRD/PRD: Third-party services specific to this project]
| Service | Purpose | Docs |
|---------|---------|------|
| [detected] | [purpose] | [link] |
## Environment Configuration
[From TRD: Project-specific env vars NOT covered by Ring]
| Variable | Purpose | Example |
|----------|---------|---------|
| [detected] | [purpose] | [example] |
## Domain Terminology
[From PRD: Technical names used in this codebase]
| Term | Definition | Used In |
|------|------------|---------|
| [detected] | [definition] | [location] |
---
*Generated from: [PRD path], [TRD path], [Feature Map path]*
*Ring Standards Version: [version from WebFetch]*
*Generated: [ISO timestamp]*
Check for Missing Information
If any section is empty or incomplete, ask supplementary questions:
| Missing Section | Supplementary Question |
|---|---|
| Tech Stack (Not in Ring) | "Any technology not covered by Ring Standards (message broker, cache, etc.)?" |
| External Integrations | "Any third-party APIs or external services?" |
| Domain Terminology | "What are the main entities/classes in this codebase?" |
| Non-Standard Directories | "Any directories that don't follow standard API structure (workers, consumers)?" |
Note: Do NOT ask about architecture, error handling, logging, testing - Ring Standards covers these.
After Generation
Present to user for review, then proceed to Step 1.
Step 0.3.2: HARD BLOCK - No PM Documents (New Projects Only)
When User Has No PM Documents
┌─────────────────────────────────────────────────────────────────┐
│ ⛔ CANNOT PROCEED - PM DOCUMENTS REQUIRED │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Development cannot start without PM documents. │
│ │
│ You MUST create PRD, TRD, and/or Feature Map documents first │
│ using PM team skills: │
│ │
│ /pre-dev-full → For features ≥2 days (9 gates) │
│ /pre-dev-feature → For features <2 days (4 gates) │
│ │
│ These commands will guide you through creating: │
│ • PRD (Product Requirements Document) │
│ • TRD (Technical Requirements Document) │
│ • Feature Map (technology choices, feature relationships) │
│ │
│ After completing pre-dev workflow, run /dev-cycle again. │
│ │
└─────────────────────────────────────────────────────────────────┘
Action
STOP EXECUTION. Do NOT proceed to Step 1.
Step 0 Anti-Rationalization
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Skip PM docs, I'll add them later" | Later = never. No PM docs = no project context = agents guessing. | Run /pre-dev-full or /pre-dev-feature NOW |
| "Project is simple, doesn't need PM docs" | Simple projects still need domain context defined upfront. | Create PM documents first |
| "I know what I want to build" | Your knowledge ≠ documented knowledge agents can use. | Document in PRD/TRD/Feature Map |
| "PM workflow takes too long" | PM workflow takes 30-60 min. Rework from unclear requirements takes days. | Invest time upfront |
| "Just let me start coding" | Coding without requirements = building the wrong thing. | Requirements first, code second |
| "It's legacy but I don't want to answer questions" | Legacy analysis takes ~5 min. Without it, agents have zero context. | Answer the 4 questions |
| "Legacy project is too complex to explain" | Start with high-level answers. PROJECT_RULES.md can be refined later. | Provide what you know NOW |
Pressure Resistance
| User Says | Your Response |
|---|---|
| "Just skip this, I'll create PM docs later" | "PM documents are REQUIRED for new projects. Without them, agents cannot understand your project's domain context or technical requirements. Run /pre-dev-full or /pre-dev-feature first." |
| "I don't need formal documents" | "PM documents are the source of truth for PROJECT_RULES.md. Development cannot start without documented requirements." |
| "This is just a quick prototype" | "Even prototypes need clear requirements. /pre-dev-feature takes ~30 minutes and prevents hours of rework." |
| "I already explained what I want verbally" | "Verbal explanations cannot be used by agents. Requirements MUST be documented in PRD/TRD/Feature Map files." |
| "It's a legacy project but skip the questions" | "The legacy analysis (codebase-explorer + 3 questions) is the only way I can understand your project. It takes ~5 minutes and enables me to help you effectively." |
| "I'll fill in PROJECT_RULES.md myself" | "That works! Create docs/PROJECT_RULES.md with: Tech Stack (not in Ring), External Integrations, Domain Terminology. Do NOT duplicate Ring Standards content. Then run /dev-cycle again." |
Step 1: Initialize or Resume
New Cycle (with task file path)
Input: path/to/tasks.md OR path/to/pre-dev/{feature}/
1. Detect input: File → Load directly | Directory → Load tasks.md + discover subtasks/
2. Build order: Read tasks, check for subtasks (ST-XXX-01, 02...) or TDD autonomous mode
3. Determine state path:
   - IF source_file contains docs/refactor/ → state_path = "docs/dev-refactor/current-cycle.json", cycle_type = "refactor"
   - ELSE → state_path = "docs/dev-cycle/current-cycle.json", cycle_type = "feature"
4. Initialize state: Generate cycle_id, create state file at {state_path}, set indices to 0
5. Display plan: "Loaded X tasks with Y subtasks"
6. ASK EXECUTION MODE (MANDATORY - AskUserQuestion):
   - Options: (a) Manual per subtask (b) Manual per task (c) Automatic
   - Do NOT skip: User hints ≠ mode selection. Only explicit a/b/c is valid.
7. ASK COMMIT TIMING (MANDATORY - AskUserQuestion):
   - Options: (a) Per subtask (b) Per task (c) At the end
   - Store in commit_timing field in state
8. Start: Display mode + commit timing, proceed to Gate 0
Resume Cycle (--resume flag)
- Find existing state file:
  - Check docs/dev-cycle/current-cycle.json first
  - If not found, check docs/dev-refactor/current-cycle.json
  - If neither exists → Error: "No cycle to resume"
- Load found state file, validate (state_path is stored in the state object)
- Display: cycle started, tasks completed/total, current task/subtask/gate, paused reason
- Handle paused states:

| Status | Action |
|---|---|
| paused_for_approval | Re-present Step 7.1 checkpoint |
| paused_for_testing | Ask if testing complete → continue or keep paused |
| paused_for_task_approval | Re-present Step 7.2 checkpoint |
| paused_for_integration_testing | Ask if integration testing complete |
| paused (generic) | Ask user to confirm resume |
| in_progress | Resume from current gate |
Input Validation
Task files are generated by /pre-dev-* or /dev-refactor, which handle content validation. The dev-cycle performs basic format checks:
Format Checks
| Check | Validation | Action |
|---|---|---|
| File exists | Task file path is readable | Error: abort |
| Task headers | At least one ## Task: found | Error: abort |
| Task ID format | ## Task: {ID} - {Title} | Warning: use line number as ID |
| Acceptance criteria | At least one - [ ] per task | Warning: task may fail validation gate |
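A minimal sketch of these format checks (Python; the regexes are assumptions about the task file layout described above, and the function name is hypothetical):

```python
# Illustrative only: basic format checks for a tasks.md file before starting a cycle.
import re

def validate_task_file(content: str) -> list[str]:
    """Return a list of problems; any 'ERROR:' entry aborts the cycle."""
    problems = []
    if not re.search(r"^## Task:", content, flags=re.MULTILINE):
        problems.append("ERROR: no '## Task:' headers found - abort")
    if not re.search(r"^## Task: \S+ - .+$", content, flags=re.MULTILINE):
        problems.append("WARNING: task ID format not matched - using line numbers as IDs")
    if "- [ ]" not in content:
        problems.append("WARNING: no acceptance criteria ('- [ ]') - task may fail validation gate")
    return problems
```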
Step 2: Gate 0 - Implementation (Per Execution Unit)
REQUIRED SUB-SKILL: Use dev-implementation
Execution Unit: Task (if no subtasks) OR Subtask (if task has subtasks)
⛔ MANDATORY: Agent Dispatch Required
See shared-patterns/shared-orchestrator-principle.md for full details.
Gate 0 has TWO explicit sub-phases with a HARD GATE between them:
┌─────────────────────────────────────────────────────────────────┐
│ GATE 0.1: TDD-RED │
│ ───────────────── │
│ Write failing test → Run test → Capture FAILURE output │
│ │
│ ═══════════════════ HARD GATE ═══════════════════════════════ │
│ CANNOT proceed to 0.2 until failure output is captured │
│ ════════════════════════════════════════════════════════════ │
│ │
│ GATE 0.2: TDD-GREEN │
│ ────────────────── │
│ Implement minimal code → Run test → Verify PASS │
└─────────────────────────────────────────────────────────────────┘
Step 2.1: Gate 0.1 - TDD-RED (Write Failing Test)
Record gate start timestamp
Set gate_progress.implementation.tdd_red.status = "in_progress"
Determine appropriate agent based on content:
- Go files/go.mod → backend-engineer-golang
- TypeScript backend → backend-engineer-typescript
- React/Frontend → frontend-engineer-typescript
- Infrastructure → devops-engineer
Dispatch to selected agent for TDD-RED ONLY:
See shared-patterns/template-tdd-prompts.md for the TDD-RED prompt template.
Include: unit_id, title, requirements, acceptance_criteria in the prompt.
Receive TDD-RED report from agent
VERIFY FAILURE OUTPUT EXISTS (HARD GATE): See shared-patterns/template-tdd-prompts.md for verification rules.
Update state:
gate_progress.implementation.tdd_red = {
  "status": "completed",
  "test_file": "[path from agent]",
  "failure_output": "[actual failure output from agent]",
  "completed_at": "[ISO timestamp]"
}
Display to user:
┌─────────────────────────────────────────────────┐
│ ✓ TDD-RED COMPLETE                              │
├─────────────────────────────────────────────────┤
│ Test: [test_file]:[test_function]               │
│ Failure: [first line of failure output]         │
│                                                 │
│ Proceeding to TDD-GREEN...                      │
└─────────────────────────────────────────────────┘
Proceed to Gate 0.2
Step 2.2: Gate 0.2 - TDD-GREEN (Implementation)
PREREQUISITE: gate_progress.implementation.tdd_red.status == "completed"
Set gate_progress.implementation.tdd_green.status = "in_progress"
Dispatch to same agent for TDD-GREEN:
See shared-patterns/template-tdd-prompts.md for the TDD-GREEN prompt template (includes observability requirements).
Include: unit_id, title, tdd_red.test_file, tdd_red.failure_output in the prompt.
Receive TDD-GREEN report from agent
VERIFY PASS OUTPUT EXISTS (HARD GATE): See shared-patterns/template-tdd-prompts.md for verification rules.
Update state:
gate_progress.implementation.tdd_green = {
  "status": "completed",
  "implementation_file": "[path from agent]",
  "test_pass_output": "[actual pass output from agent]",
  "completed_at": "[ISO timestamp]"
}
gate_progress.implementation.status = "completed"
artifacts.implementation = {files_changed, commit_sha}
agent_outputs.implementation = {
  agent: "[selected_agent]",
  output: "[full agent output for feedback analysis]",
  timestamp: "[ISO timestamp]",
  duration_ms: [execution time],
  iterations: [standards_compliance_iterations from Step 2.3],
  standards_compliance: {
    total_sections: [N from final verification],
    compliant: [N sections with ✅],
    not_applicable: [N sections with N/A],
    non_compliant: 0,
    gaps: [] // Empty when gate passes; populated during iterations
  }
}
If iterations > 1, populate gaps[] with issues found in previous iterations:
gaps: [
  {
    "section": "[section name from ❌ row]",
    "status": "❌",
    "reason": "[why it failed]",
    "file": "[file path if available]",
    "line": [line number if available],
    "evidence": "[code snippet or description]",
    "fixed_in_iteration": [iteration number when fixed]
  }
]
Display to user:
┌─────────────────────────────────────────────────┐
│ ✓ GATE 0 COMPLETE (TDD-RED → TDD-GREEN)         │
├─────────────────────────────────────────────────┤
│ RED: [test_file] - FAIL captured ✓              │
│ GREEN: [impl_file] - PASS verified ✓            │
│                                                 │
│ Proceeding to Gate 1 (DevOps)...                │
└─────────────────────────────────────────────────┘
Proceed to Step 2.3 (Standards Compliance Verification)
Step 2.3: Standards Compliance Verification (HARD GATE)
PREREQUISITE: gate_progress.implementation.tdd_green.status == "completed"
Purpose: Verify implementation follows ALL standards defined in agent's standards file.
See shared-patterns/template-tdd-prompts.md → "Orchestrator Enforcement" section for full verification process.
Process:
Step 2.3.1: Initialize Iteration Counter
Set standards_compliance_iterations = 0
Set MAX_ITERATIONS = 3
Step 2.3.2: Verification Loop
LOOP while standards_compliance_iterations < MAX_ITERATIONS:
1. INCREMENT counter FIRST (before any verification):
standards_compliance_iterations += 1
2. PARSE agent output for "## Standards Coverage Table":
IF NOT FOUND:
→ Output INCOMPLETE
→ Log: "Iteration [N]: Standards Coverage Table missing"
→ Re-dispatch agent (see Step 2.3.3)
→ CONTINUE loop
3. PARSE "ALL STANDARDS MET:" value:
IF NOT FOUND:
→ Output INCOMPLETE
→ Log: "Iteration [N]: Compliance Summary missing"
→ Re-dispatch agent (see Step 2.3.3)
→ CONTINUE loop
4. COUNT sections from Standards Coverage Table:
total_sections = count all rows
compliant = count rows with ✅
not_applicable = count rows with N/A
non_compliant = count rows with ❌
5. VERIFY compliance:
IF "ALL STANDARDS MET: ✅ YES" AND non_compliant == 0:
→ Gate 0 PASSED
→ BREAK loop (proceed to Step 2.3.4)
IF "ALL STANDARDS MET: ❌ NO" OR non_compliant > 0:
→ Gate 0 BLOCKED
→ Extract ❌ sections
→ Log: "Iteration [N]: [non_compliant] sections non-compliant"
→ Re-dispatch agent (see Step 2.3.3)
→ CONTINUE loop
END LOOP
IF standards_compliance_iterations >= MAX_ITERATIONS AND non_compliant > 0:
→ HARD BLOCK
→ Update state with failure info
→ Report to user: "Standards compliance failed after 3 attempts"
→ STOP execution
Step 2.3.3: Re-dispatch Agent for Compliance Fix
Task tool:
subagent_type: "[same agent from TDD-GREEN]"
model: "opus"
description: "Fix missing Ring Standards for [unit_id] (attempt [N]/3)"
prompt: |
⛔ STANDARDS NOT MET - Fix Required (Attempt [standards_compliance_iterations] of 3)
Your Standards Coverage Table shows these sections as ❌:
[list ❌ sections extracted from table]
WebFetch your standards file:
[URL for agent's standards - golang.md, typescript.md, etc.]
Implement ALL missing sections.
Return updated Standards Coverage Table with ALL ✅ or N/A.
Previous attempt summary:
- Total sections: [total_sections]
- Compliant: [compliant]
- Not applicable: [not_applicable]
- Non-compliant: [non_compliant]
Step 2.3.4: Update State with Compliance Metrics
gate_progress.implementation = {
"status": "completed",
"tdd_red": {...},
"tdd_green": {...},
"standards_verified": true,
"standards_compliance_iterations": [final count - e.g., 1, 2, or 3],
"standards_coverage": {
"total_sections": [N - from final successful verification],
"compliant": [N - sections with ✅],
"not_applicable": [N - sections with N/A],
"non_compliant": 0
}
}
Note: non_compliant MUST be 0 when gate passes. If non-zero after 3 iterations, gate is BLOCKED.
Display to user:
┌─────────────────────────────────────────────────┐
│ ✓ GATE 0 COMPLETE                               │
├─────────────────────────────────────────────────┤
│ TDD-RED: [test_file] - FAIL captured ✓          │
│ TDD-GREEN: [impl_file] - PASS verified ✓        │
│ STANDARDS: [N]/[N] sections compliant ✓         │
│ ITERATIONS: [standards_compliance_iterations]   │
│                                                 │
│ Proceeding to Gate 1 (DevOps)...                │
└─────────────────────────────────────────────────┘
⛔ SAVE STATE TO FILE (MANDATORY):
Write tool:
  file_path: [state.state_path]  # "docs/dev-cycle/current-cycle.json" or "docs/dev-refactor/current-cycle.json"
  content: [full updated state JSON]
See "State Persistence Rule" section. State MUST be written to file after Gate 0.
Proceed to Gate 1
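An illustrative sketch of the counting done in the Step 2.3.2 verification loop (Python; the ✅/❌/N/A markers and the "## Standards Coverage Table" heading come from this document, while the parsing details and function name are assumptions):

```python
# Illustrative only: count compliance rows from an agent's Standards Coverage Table.
def count_compliance(agent_output: str) -> dict:
    if "## Standards Coverage Table" not in agent_output:
        raise ValueError("Standards Coverage Table missing - re-dispatch agent")
    section = agent_output.split("## Standards Coverage Table", 1)[1]
    section = section.split("\n## ", 1)[0]  # stop at the next heading, if any
    rows = [line for line in section.splitlines()
            if line.strip().startswith("|") and "---" not in line]
    rows = rows[1:]  # drop the header row
    return {
        "total_sections": len(rows),
        "compliant": sum("✅" in r for r in rows),
        "not_applicable": sum("N/A" in r for r in rows),
        "non_compliant": sum("❌" in r for r in rows),  # must be 0 for the gate to pass
    }
```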
Standards Compliance Anti-Rationalization
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Agent said implementation is complete" | Agent completion ≠ Standards compliance. Verify table. | Parse and verify Standards Coverage Table |
| "Table wasn't in agent output" | Missing table = Incomplete output = Re-dispatch | Re-dispatch agent to output table |
| "Only 1-2 sections are ❌" | ANY ❌ = BLOCKED. Count is irrelevant. | Re-dispatch to fix ALL ❌ sections |
| "lib-commons is the main thing" | ALL sections are equally required. No prioritization. | Verify ALL sections from standards-coverage-table.md |
| "Agent knows the standards" | Knowledge ≠ implementation. Evidence required. | Check file:line evidence in table |
| "Standards verification is slow" | Verification prevents rework. 30 seconds vs hours. | Always verify before proceeding |
TDD Sub-Phase Anti-Rationalization
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Test passes on first run, skip RED" | Passing test ≠ TDD. Test MUST fail first. | Delete test, rewrite to fail first |
| "I'll capture failure output later" | Later = never. Failure output is the gate. | Capture NOW or cannot proceed |
| "Failure output is in the logs somewhere" | "Somewhere" ≠ captured. Must be in state. | Extract and store in tdd_red.failure_output |
| "GREEN passed, RED doesn't matter now" | RED proves test validity. Skip = invalid test. | Re-run RED phase, capture failure |
| "Agent already did TDD internally" | Internal ≠ verified. State must show evidence. | Agent must output failure explicitly |
Step 3: Gate 1 - DevOps (Per Execution Unit)
REQUIRED SUB-SKILL: Use dev-devops
⛔ HARD GATE: Required Artifacts MUST Be Created
Gate 1 is a BLOCKING gate. DevOps agent MUST create ALL required artifacts. If ANY artifact is missing:
- You CANNOT proceed to Gate 2
- You MUST re-dispatch to devops-engineer to create missing artifacts
- You MUST verify ALL artifacts exist before proceeding
Required Artifacts
See shared-patterns/standards-coverage-table.md → "devops-engineer → devops.md" for ALL required sections.
Key artifacts from devops.md:
- Containers (Dockerfile + Docker Compose)
- Makefile Standards (ALL required commands)
- Infrastructure as Code (if applicable)
- Helm charts (if K8s deployment)
Step 3.1: Prepare Input for dev-devops Skill
Gather from previous gates:
devops_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
// REQUIRED - from Gate 0 context
language: state.current_unit.language, // "go" | "typescript" | "python"
service_type: state.current_unit.service_type, // "api" | "worker" | "batch" | "cli"
implementation_files: agent_outputs.implementation.files_changed, // list of files from Gate 0
// OPTIONAL - additional context
gate0_handoff: agent_outputs.implementation, // full Gate 0 output
new_dependencies: state.current_unit.new_deps || [], // new deps added in Gate 0
new_env_vars: state.current_unit.env_vars || [], // env vars needed
new_services: state.current_unit.services || [], // postgres, redis, etc.
existing_dockerfile: [check if Dockerfile exists],
existing_compose: [check if docker-compose.yml exists]
}
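A sketch of how this input could be assembled from the in-memory state and Gate 0 outputs, assuming Node's `fs` module is used for the artifact existence checks; the helper name is illustrative:

```typescript
import { existsSync } from "node:fs";

// Build the Gate 1 input object from cycle state and Gate 0 outputs.
// `state` and `agentOutputs` mirror the field names used in this document.
function buildDevopsInput(state: any, agentOutputs: any) {
  const unit = state.current_unit;
  return {
    unit_id: unit.id,
    language: unit.language,                 // "go" | "typescript" | "python"
    service_type: unit.service_type,         // "api" | "worker" | "batch" | "cli"
    implementation_files: agentOutputs.implementation.files_changed,
    gate0_handoff: agentOutputs.implementation,
    new_dependencies: unit.new_deps ?? [],
    new_env_vars: unit.env_vars ?? [],
    new_services: unit.services ?? [],
    existing_dockerfile: existsSync("Dockerfile"),
    existing_compose: existsSync("docker-compose.yml"),
  };
}
```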
Step 3.2: Invoke dev-devops Skill
1. Record gate start timestamp
2. Invoke dev-devops skill with structured input:
Skill("dev-devops") with input:
unit_id: devops_input.unit_id
language: devops_input.language
service_type: devops_input.service_type
implementation_files: devops_input.implementation_files
gate0_handoff: devops_input.gate0_handoff
new_dependencies: devops_input.new_dependencies
new_env_vars: devops_input.new_env_vars
new_services: devops_input.new_services
existing_dockerfile: devops_input.existing_dockerfile
existing_compose: devops_input.existing_compose
The skill handles:
- Dispatching devops-engineer agent
- Dockerfile creation/update
- docker-compose.yml configuration
- .env.example documentation
- Verification commands execution
- Fix iteration loop (max 3 attempts)
3. Parse skill output for results (see the parsing sketch after Step 3.3):
Expected output sections:
- "## DevOps Summary" → status, iterations
- "## Files Changed" → Dockerfile, docker-compose, .env.example actions
- "## Verification Results" → build, startup, health checks
- "## Handoff to Next Gate" → ready_for_sre: YES/NO
IF skill output contains "Status: PASS" AND "Ready for Gate 2: YES":
→ Gate 1 PASSED. Proceed to Step 3.3.
IF skill output contains "Status: FAIL" OR "Ready for Gate 2: NO":
→ Gate 1 BLOCKED.
→ Skill already dispatched fixes to devops-engineer
→ Skill already re-ran verification
→ If "ESCALATION" in output: STOP and report to user
4. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → [state.state_path]
Step 3.3: Gate 1 Complete
5. When dev-devops skill returns PASS:
Parse from skill output:
- status: extract from "## DevOps Summary"
- dockerfile_action: extract from "## Files Changed" table
- compose_action: extract from "## Files Changed" table
- verification_passed: extract from "## Verification Results"
- agent_outputs.devops = {
skill: "dev-devops",
output: "[full skill output]",
artifacts_created: ["Dockerfile", "docker-compose.yml", ".env.example"],
verification_passed: true,
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
6. Update state:
- gate_progress.devops.status = "completed"
- gate_progress.devops.artifacts = [list from skill output]
7. Proceed to Gate 2
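The PASS/FAIL check in Steps 3.2–3.3 follows the same pattern every gate skill uses. A minimal sketch of that check, using the string markers listed in the expected output sections; the helper name is illustrative:

```typescript
interface GateResult {
  passed: boolean;
  blocked: boolean;
  escalation: boolean;
}

// Decide the gate outcome from the skill's markdown output.
// `nextGateLabel` is "Gate 2" for DevOps, "Gate 3" for SRE, and so on.
function parseGateOutcome(skillOutput: string, nextGateLabel: string): GateResult {
  const passed =
    skillOutput.includes("Status: PASS") &&
    skillOutput.includes(`Ready for ${nextGateLabel}: YES`);
  const blocked =
    skillOutput.includes("Status: FAIL") ||
    skillOutput.includes(`Ready for ${nextGateLabel}: NO`);
  const escalation = skillOutput.includes("ESCALATION");
  return { passed, blocked, escalation };
}

// Usage for Gate 1: if escalation → stop and report; if blocked → the skill has
// already dispatched fixes; if passed → record outputs and proceed to Gate 2.
```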
Gate 1 Anti-Rationalization Table
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Dockerfile exists, skip other artifacts" | ALL artifacts required. 1/4 ≠ complete. | Create ALL artifacts |
| "docker-compose not needed locally" | docker-compose is MANDATORY for local dev. | Create docker-compose.yml |
| "Makefile is optional" | Makefile is MANDATORY for standardized commands. | Create Makefile |
| ".env.example can be added later" | .env.example documents required config NOW. | Create .env.example |
| "Small service doesn't need all this" | Size is irrelevant. Standards apply uniformly. | Create ALL artifacts |
Step 4: Gate 2 - SRE (Per Execution Unit)
REQUIRED SUB-SKILL: Use dev-sre
Step 4.1: Prepare Input for dev-sre Skill
Gather from previous gates:
sre_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
// REQUIRED - from Gate 0 context
language: state.current_unit.language, // "go" | "typescript" | "python"
service_type: state.current_unit.service_type, // "api" | "worker" | "batch" | "cli"
implementation_agent: agent_outputs.implementation.agent, // e.g., "backend-engineer-golang"
implementation_files: agent_outputs.implementation.files_changed, // list of files from Gate 0
// OPTIONAL - additional context
external_dependencies: state.current_unit.external_deps || [], // HTTP clients, gRPC, queues
gate0_handoff: agent_outputs.implementation, // full Gate 0 output
gate1_handoff: agent_outputs.devops // full Gate 1 output
}
Step 4.2: Invoke dev-sre Skill
1. Record gate start timestamp
2. Invoke dev-sre skill with structured input:
Skill("dev-sre") with input:
unit_id: sre_input.unit_id
language: sre_input.language
service_type: sre_input.service_type
implementation_agent: sre_input.implementation_agent
implementation_files: sre_input.implementation_files
external_dependencies: sre_input.external_dependencies
gate0_handoff: sre_input.gate0_handoff
gate1_handoff: sre_input.gate1_handoff
The skill handles:
- Dispatching SRE agent for validation
- Structured logging validation
- Distributed tracing validation
- Code instrumentation coverage (90%+ required)
- Context propagation validation (InjectHTTPContext/InjectGRPCContext)
- Dispatching fixes to implementation agent if needed
- Re-validation loop (max 3 iterations)
3. Parse skill output for validation results (see the coverage-extraction sketch after Step 4.3):
Expected output sections:
- "## Validation Result" → status, iterations, coverage
- "## Instrumentation Coverage" → table with per-layer coverage
- "## Issues Found" → list or "None"
- "## Handoff to Next Gate" → ready_for_testing: YES/NO
IF skill output contains "Status: PASS" AND "Ready for Gate 3: YES":
→ Gate 2 PASSED. Proceed to Step 4.3.
IF skill output contains "Status: FAIL" OR "Ready for Gate 3: NO":
→ Gate 2 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran validation
→ If "ESCALATION" in output: STOP and report to user
4. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → [state.state_path]
Step 4.3: Gate 2 Complete
5. When dev-sre skill returns PASS:
Parse from skill output:
- status: extract from "## Validation Result"
- instrumentation_coverage: extract percentage from coverage table
- iterations: extract from "Iterations:" line
- agent_outputs.sre = {
skill: "dev-sre",
output: "[full skill output]",
validation_result: "PASS",
instrumentation_coverage: "[X%]",
iterations: [count],
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
6. Update state:
- gate_progress.sre.status = "completed"
- gate_progress.sre.observability_validated = true
- gate_progress.sre.instrumentation_coverage = "[X%]"
7. Proceed to Gate 3
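Extracting the instrumentation coverage figure from the dev-sre output can be a simple regex over the "## Instrumentation Coverage" section. A hedged sketch, assuming the section reports an overall percentage such as "Overall: 93%" (the exact wording may differ):

```typescript
// Pull the first percentage that appears after the coverage heading.
// Returns null when the section or a percentage is missing; treat that as
// FAIL (re-dispatch), never as silent success.
function extractInstrumentationCoverage(skillOutput: string): number | null {
  const sectionStart = skillOutput.indexOf("## Instrumentation Coverage");
  if (sectionStart === -1) return null;
  const match = skillOutput.slice(sectionStart).match(/(\d{1,3})\s*%/);
  return match ? Number(match[1]) : null;
}

// Gate 2 requires 90%+ instrumentation coverage.
const MIN_INSTRUMENTATION_COVERAGE = 90;
```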
Gate 2 Anti-Rationalization Table
See dev-sre/SKILL.md for complete anti-rationalization tables covering:
- Observability deferral rationalizations
- Instrumentation coverage rationalizations
- Context propagation rationalizations
Gate 2 Pressure Resistance
| User Says | Your Response |
|---|---|
| "Skip SRE validation, we'll add observability later" | "Observability is MANDATORY for Gate 2. Invoking dev-sre skill now." |
| "SRE found issues but let's continue" | "Gate 2 is a HARD GATE. dev-sre skill handles fix dispatch and re-validation." |
| "Instrumentation coverage is low but code works" | "90%+ instrumentation coverage is REQUIRED. dev-sre skill will not pass until met." |
Step 5: Gate 3 - Testing (Per Execution Unit)
REQUIRED SUB-SKILL: Use dev-testing
Step 5.1: Prepare Input for dev-testing Skill
Gather from previous gates:
testing_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
acceptance_criteria: state.current_unit.acceptance_criteria, // list of ACs to test
implementation_files: agent_outputs.implementation.files_changed,
language: state.current_unit.language, // "go" | "typescript" | "python"
// OPTIONAL - additional context
coverage_threshold: 85, // Ring minimum, PROJECT_RULES.md can raise
gate0_handoff: agent_outputs.implementation, // full Gate 0 output
existing_tests: [check for existing test files]
}
Step 5.2: Invoke dev-testing Skill
1. Record gate start timestamp
2. Invoke dev-testing skill with structured input:
Skill("dev-testing") with input:
unit_id: testing_input.unit_id
acceptance_criteria: testing_input.acceptance_criteria
implementation_files: testing_input.implementation_files
language: testing_input.language
coverage_threshold: testing_input.coverage_threshold
gate0_handoff: testing_input.gate0_handoff
existing_tests: testing_input.existing_tests
The skill handles:
- Dispatching qa-analyst agent
- Test creation following TDD methodology
- Coverage measurement and validation (85%+ required)
- Traceability matrix (AC → Test mapping)
- Dispatching fixes to implementation agent if coverage < threshold
- Re-validation loop (max 3 iterations)
3. Parse skill output for results:
Expected output sections:
- "## Testing Summary" → status, iterations
- "## Coverage Report" → threshold vs actual
- "## Traceability Matrix" → AC-to-test mapping
- "## Handoff to Next Gate" → ready_for_review: YES/NO
IF skill output contains "Status: PASS" AND "Ready for Gate 4: YES":
→ Gate 3 PASSED. Proceed to Step 5.3.
IF skill output contains "Status: FAIL" OR "Ready for Gate 4: NO":
→ Gate 3 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran coverage check
→ If "ESCALATION" in output: STOP and report to user
4. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → [state.state_path]
Step 5.3: Gate 3 Complete
5. When dev-testing skill returns PASS:
Parse from skill output:
- coverage_actual: extract percentage from "## Coverage Report"
- coverage_threshold: extract from "## Coverage Report"
- criteria_covered: extract from "## Traceability Matrix"
- iterations: extract from "Iterations:" line
- agent_outputs.testing = {
skill: "dev-testing",
output: "[full skill output]",
verdict: "PASS",
coverage_actual: [X%],
coverage_threshold: [85%],
criteria_covered: "[X/Y]",
iterations: [count],
timestamp: "[ISO timestamp]",
duration_ms: [execution time],
failures: [], // Empty when PASS; see schema below for FAIL
uncovered_criteria: [] // Empty when all ACs covered
}
**If iterations > 1 (tests failed before passing), populate `failures[]`:**
```json
failures: [
{
"test_name": "TestUserCreate_InvalidEmail",
"test_file": "internal/handler/user_test.go",
"error_type": "assertion|panic|timeout|compilation",
"expected": "[expected value]",
"actual": "[actual value]",
"message": "[error message from test output]",
"stack_trace": "[relevant stack trace]",
"fixed_in_iteration": [iteration number when fixed]
}
]
```
If coverage < 100% of acceptance criteria, populate `uncovered_criteria[]`:
uncovered_criteria: [
{
"criterion_id": "AC-001",
"description": "User should receive email confirmation",
"reason": "No test found for email sending functionality"
}
]
Update state:
- gate_progress.testing.status = "completed"
- gate_progress.testing.coverage = [coverage_actual]
Proceed to Gate 4
### Gate 3 Thresholds
- **Minimum:** 85% (Ring standard - CANNOT be lowered)
- **Project-specific:** Can be higher if defined in `docs/PROJECT_RULES.md`
- **Validation:** Threshold < 85% → Use 85%
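The threshold rule above amounts to clamping the configured value to the Ring floor. A one-function sketch (how the value is read from `docs/PROJECT_RULES.md` is assumed, not specified here):

```typescript
const RING_MIN_COVERAGE = 85; // Ring standard - cannot be lowered

// PROJECT_RULES.md may raise the threshold, never lower it.
function effectiveCoverageThreshold(projectThreshold?: number): number {
  if (projectThreshold === undefined) return RING_MIN_COVERAGE;
  return Math.max(projectThreshold, RING_MIN_COVERAGE);
}

// effectiveCoverageThreshold(80) -> 85 (below floor, clamped up)
// effectiveCoverageThreshold(90) -> 90 (project raised it)
```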
### Gate 3 Pressure Resistance
| User Says | Your Response |
|-----------|---------------|
| "84% is close enough" | "85% is Ring minimum. dev-testing skill enforces this." |
| "Skip testing, deadline" | "Testing is MANDATORY. dev-testing skill handles iterations." |
| "Manual testing covers it" | "Gate 3 requires executable unit tests. Invoking dev-testing now." |
## Step 6: Gate 4 - Review (Per Execution Unit)
**REQUIRED SUB-SKILL:** Use `requesting-code-review`
### Step 6.1: Prepare Input for requesting-code-review Skill
```text
Gather from previous gates:
review_input = {
// REQUIRED - from current execution unit
unit_id: state.current_unit.id,
base_sha: state.current_unit.base_sha, // SHA before implementation
head_sha: [current HEAD], // SHA after all gates
implementation_summary: state.current_unit.title + requirements,
requirements: state.current_unit.acceptance_criteria,
// OPTIONAL - additional context
implementation_files: agent_outputs.implementation.files_changed,
gate0_handoff: agent_outputs.implementation // full Gate 0 output
}
```
Step 6.2: Invoke requesting-code-review Skill
1. Record gate start timestamp
2. Invoke requesting-code-review skill with structured input:
Skill("requesting-code-review") with input:
unit_id: review_input.unit_id
base_sha: review_input.base_sha
head_sha: review_input.head_sha
implementation_summary: review_input.implementation_summary
requirements: review_input.requirements
implementation_files: review_input.implementation_files
gate0_handoff: review_input.gate0_handoff
The skill handles:
- Dispatching all 3 reviewers in PARALLEL (single message with 3 Task calls)
- Aggregating issues by severity (CRITICAL/HIGH/MEDIUM/LOW/COSMETIC)
- Dispatching fixes to implementation agent for blocking issues
- Re-running ALL 3 reviewers after fixes
- Iteration tracking (max 3 attempts)
- Adding TODO/FIXME comments for non-blocking issues
3. Parse skill output for results:
Expected output sections:
- "## Review Summary" → status, iterations
- "## Issues by Severity" → counts per severity level
- "## Reviewer Verdicts" → code-reviewer, business-logic-reviewer, security-reviewer
- "## Handoff to Next Gate" → ready_for_validation: YES/NO
IF skill output contains "Status: PASS" AND "Ready for Gate 5: YES":
→ Gate 4 PASSED. Proceed to Step 6.3.
IF skill output contains "Status: FAIL" OR "Ready for Gate 5: NO":
→ Gate 4 BLOCKED.
→ Skill already dispatched fixes to implementation agent
→ Skill already re-ran all 3 reviewers
→ If "ESCALATION" in output: STOP and report to user
4. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → [state.state_path]
Step 6.3: Gate 4 Complete
5. When requesting-code-review skill returns PASS:
Parse from skill output:
- reviewers_passed: extract from "## Reviewer Verdicts" (should be "3/3")
- issues_critical: extract count from "## Issues by Severity"
- issues_high: extract count from "## Issues by Severity"
- issues_medium: extract count from "## Issues by Severity"
- iterations: extract from "Iterations:" line
- agent_outputs.review = {
skill: "requesting-code-review",
output: "[full skill output]",
iterations: [count],
timestamp: "[ISO timestamp]",
duration_ms: [execution time],
reviewers_passed: "3/3",
code_reviewer: {
verdict: "PASS",
issues_count: N,
issues: [] // Structured issues - see schema below
},
business_logic_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
},
security_reviewer: {
verdict: "PASS",
issues_count: N,
issues: []
}
}
**Populate `issues[]` for each reviewer with ALL issues found (even if fixed):**
```json
issues: [
{
"severity": "CRITICAL|HIGH|MEDIUM|LOW|COSMETIC",
"category": "error-handling|security|performance|maintainability|business-logic|...",
"description": "[detailed description of the issue]",
"file": "internal/handler/user.go",
"line": 45,
"code_snippet": "return err",
"suggestion": "Use fmt.Errorf(\"failed to create user: %w\", err)",
"fixed": true|false,
"fixed_in_iteration": [iteration number when fixed, null if not fixed]
}
]
```
Issue tracking rules:
- ALL issues found across ALL iterations MUST be recorded
- `fixed: true` + `fixed_in_iteration: N` for issues resolved during review
- `fixed: false` + `fixed_in_iteration: null` for LOW/COSMETIC (TODO/FIXME added)
- This enables feedback-loop to analyze recurring issue patterns
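Given the issue schema above, the blocking decision is a severity filter. A sketch of how the aggregation might look; the type and function names are illustrative, not part of the skill:

```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "COSMETIC";

interface ReviewIssue {
  severity: Severity;
  fixed: boolean;
  fixed_in_iteration: number | null;
}

const BLOCKING: Severity[] = ["CRITICAL", "HIGH", "MEDIUM"];

// Gate 4 stays blocked while any CRITICAL/HIGH/MEDIUM issue remains unfixed.
// LOW/COSMETIC issues are recorded (TODO/FIXME) but do not block.
function gate4Blocked(issues: ReviewIssue[]): boolean {
  return issues.some((i) => BLOCKING.includes(i.severity) && !i.fixed);
}

// Counts per severity for the "## Issues by Severity" summary.
function countBySeverity(issues: ReviewIssue[]): Record<Severity, number> {
  const counts: Record<Severity, number> = {
    CRITICAL: 0, HIGH: 0, MEDIUM: 0, LOW: 0, COSMETIC: 0,
  };
  for (const i of issues) counts[i.severity] += 1;
  return counts;
}
```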
Update state:
- gate_progress.review.status = "completed"
- gate_progress.review.reviewers_passed = "3/3"
Proceed to Gate 5
### Gate 4 Anti-Rationalization Table
| Rationalization | Why It's WRONG | Required Action |
|-----------------|----------------|-----------------|
| "Only 1 MEDIUM issue, can proceed" | MEDIUM = MUST FIX. Quantity is irrelevant. | **Fix the issue, re-run all reviewers** |
| "Issue is cosmetic, not really MEDIUM" | Reviewer decided severity. Accept their judgment. | **Fix the issue, re-run all reviewers** |
| "Will fix in next sprint" | Deferred fixes = technical debt = production bugs. | **Fix NOW before Gate 5** |
| "User approved, can skip fix" | User approval ≠ reviewer override. Fixes are mandatory. | **Fix the issue, re-run all reviewers** |
| "Same issue keeps appearing, skip it" | Recurring issue = fix is wrong. Debug properly. | **Root cause analysis, then fix** |
| "Only security reviewer found it" | One reviewer = valid finding. All findings matter. | **Fix the issue, re-run all reviewers** |
| "Iteration limit reached, just proceed" | Limit = escalate, not bypass. Quality is non-negotiable. | **Escalate to user, do NOT proceed** |
| "Tests pass, review issues don't matter" | Tests ≠ review. Different quality dimensions. | **Fix the issue, re-run all reviewers** |
### Gate 4 Pressure Resistance
| User Says | Your Response |
|-----------|---------------|
| "Just skip this MEDIUM issue" | "MEDIUM severity issues are blocking by definition. I MUST dispatch a fix to the appropriate agent before proceeding. This protects code quality." |
| "I'll fix it later, let's continue" | "Gate 4 is a HARD GATE. All CRITICAL/HIGH/MEDIUM issues must be resolved NOW. I'm dispatching the fix to [agent] and will re-run reviewers after." |
| "We're running out of time" | "Proceeding with known issues creates larger problems later. The fix dispatch is automated and typically takes 2-5 minutes. Quality gates exist to save time overall." |
| "Override the gate, I approve" | "User approval cannot override reviewer findings. The gate ensures code quality. I'll dispatch the fix now." |
| "It's just a style issue" | "If it's truly cosmetic, reviewers would mark it COSMETIC (non-blocking). MEDIUM means it affects maintainability or correctness. Fixing now." |
## Step 7: Gate 5 - Validation (Per Execution Unit)
```text
For current execution unit:
1. Record gate start timestamp
2. Verify acceptance criteria:
For each criterion in acceptance_criteria:
- Check if implemented
- Check if tested
- Mark as PASS/FAIL
3. Run final verification:
- All tests pass?
- No Critical/High/Medium review issues?
- All acceptance criteria met?
4. If validation fails:
- Log failure reasons
- Determine which gate to revisit
- Loop back to appropriate gate
5. If validation passes:
- Set unit status = "completed"
- Record gate end timestamp
- agent_outputs.validation = {
result: "approved",
timestamp: "[ISO timestamp]",
criteria_results: [{criterion, status}]
}
- Proceed to Step 7.1 (Execution Unit Approval)
```
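A sketch of the per-criterion check and loop-back decision described above. The mapping from failure reason to gate is an assumption for illustration; in practice the orchestrator decides this from the logged failure reasons:

```typescript
interface CriterionResult {
  criterion: string;
  implemented: boolean;
  tested: boolean;
  status: "PASS" | "FAIL";
}

function validateCriteria(
  criteria: string[],
  isImplemented: (c: string) => boolean,
  isTested: (c: string) => boolean
): CriterionResult[] {
  return criteria.map((criterion) => {
    const implemented = isImplemented(criterion);
    const tested = isTested(criterion);
    return {
      criterion,
      implemented,
      tested,
      status: implemented && tested ? "PASS" : "FAIL",
    };
  });
}

// Illustrative loop-back: missing implementation → Gate 0, missing tests → Gate 3.
function gateToRevisit(results: CriterionResult[]): 0 | 3 | null {
  if (results.some((r) => !r.implemented)) return 0;
  if (results.some((r) => !r.tested)) return 3;
  return null; // all criteria met - unit can be marked "completed"
}
```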
Step 7.1: Execution Unit Approval (Conditional)
Checkpoint depends on execution_mode: manual_per_subtask → Execute | manual_per_task / automatic → Skip
COMMIT CHECK (before checkpoint):
- IF `commit_timing == "per_subtask"`:
  - Execute `/commit` command with message: `feat({unit_id}): {unit_title}`
  - Include all changed files from this subtask
- ELSE: Skip commit (will happen at task or cycle end)
(The commit-timing decision is sketched after the response table below.)

IF checkpoint applies:
- Set status = "paused_for_approval", save state
- Present summary: Unit ID, Parent Task, Gates 0-5 status, Criteria X/X, Duration, Files Changed, Commit Status
AskUserQuestion: "Ready to proceed?" Options: (a) Continue (b) Test First (c) Stop Here
Handle response:
| Response | Action |
|---|---|
| Continue | Set in_progress, move to next unit (or Step 7.2 if last) |
| Test First | Set paused_for_testing, STOP, output resume command |
| Stop Here | Set paused, STOP, output resume command |
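The commit checks in Steps 7.1, 7.2, and 8 all hinge on `commit_timing`. A hedged sketch of that decision plus the message formats used in those checks (the `/commit` dispatch itself is not modeled here):

```typescript
type CommitTiming = "per_subtask" | "per_task" | "at_end";
type Checkpoint = "subtask" | "task" | "cycle";

// Commit only when the configured timing matches the current checkpoint.
function shouldCommit(timing: CommitTiming, checkpoint: Checkpoint): boolean {
  return (
    (timing === "per_subtask" && checkpoint === "subtask") ||
    (timing === "per_task" && checkpoint === "task") ||
    (timing === "at_end" && checkpoint === "cycle")
  );
}

// Conventional-commit message formats taken from the commit checks above.
function commitMessage(checkpoint: Checkpoint, id: string, title: string): string {
  if (checkpoint === "cycle") return `feat(${id}): complete dev cycle for ${title}`;
  return `feat(${id}): ${title}`;
}
```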
Step 7.2: Task Approval Checkpoint (Conditional)
Checkpoint depends on execution_mode: manual_per_subtask / manual_per_task → Execute | automatic → Skip
COMMIT CHECK (before task checkpoint):
- IF `commit_timing == "per_task"`:
  - Execute `/commit` command with message: `feat({task_id}): {task_title}`
  - Include all changed files from this task (all subtasks combined)
- ELSE IF `commit_timing == "per_subtask"`: Already committed per subtask
- ELSE: Skip commit (will happen at cycle end)

IF checkpoint applies:
- Set task status = "completed", cycle status = "paused_for_task_approval", save state
- Present summary: Task ID, Subtasks X/X, Total Duration, Review Iterations, Files Changed, Commit Status

AskUserQuestion: "Task complete. Ready for next?" Options: (a) Continue (b) Integration Test (c) Stop Here
Handle response per the detailed flow below.
After completing all subtasks of a task:
0. Check execution_mode from state:
- If "automatic": Still run feedback, then skip to next task
- If "manual_per_subtask" OR "manual_per_task": Continue with checkpoint below
1. Set task status = "completed"
2. **⛔ MANDATORY: Run dev-feedback-loop skill**
```yaml
Skill tool:
  skill: "dev-feedback-loop"
```
Note: dev-feedback-loop manages its own TodoWrite tracking internally.
The skill will:
- Add its own todo item for tracking
- Calculate assertiveness score for the task
- Dispatch prompt-quality-reviewer agent with agent_outputs from state
- Generate improvement suggestions
- Write feedback to docs/feedbacks/cycle-{date}/{agent}.md
- Mark its todo as completed
After feedback-loop completes, update state:
- Set `tasks[current].feedback_loop_completed = true` in state file
Anti-Rationalization for Feedback Loop:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Task was simple, skip feedback" | Simple tasks still contribute to patterns | Execute Skill tool |
| "Already at 100% score" | High scores need tracking for replication | Execute Skill tool |
| "User approved, feedback unnecessary" | Approval ≠ process quality metrics | Execute Skill tool |
| "No issues found, nothing to report" | Absence of issues IS data | Execute Skill tool |
| "Time pressure, skip metrics" | Metrics take <2 min, prevent future issues | Execute Skill tool |
⛔ HARD GATE: You CANNOT proceed to step 3 without executing the Skill tool above.
3. Set cycle status = "paused_for_task_approval"
4. Save state
5. Present task completion summary (with feedback metrics):

```text
┌─────────────────────────────────────────────────┐
│ ✓ TASK COMPLETED                                │
├─────────────────────────────────────────────────┤
│ Task: [task_id] - [task_title]                  │
│                                                 │
│ Subtasks Completed: X/X                         │
│ ✓ ST-001-01: [title]                            │
│ ✓ ST-001-02: [title]                            │
│ ✓ ST-001-03: [title]                            │
│                                                 │
│ Total Duration: Xh Xm                           │
│ Total Review Iterations: N                      │
│                                                 │
│ ═══════════════════════════════════════════════ │
│ FEEDBACK METRICS                                │
│ ═══════════════════════════════════════════════ │
│                                                 │
│ Assertiveness Score: XX% (Rating)               │
│                                                 │
│ Prompt Quality by Agent:                        │
│   backend-engineer-golang: 90% (Excellent)      │
│   qa-analyst: 75% (Acceptable)                  │
│   code-reviewer: 88% (Good)                     │
│                                                 │
│ Improvements Suggested: N                       │
│ Feedback Location:                              │
│   docs/feedbacks/cycle-YYYY-MM-DD/              │
│                                                 │
│ ═══════════════════════════════════════════════ │
│                                                 │
│ All Files Changed This Task:                    │
│ - file1.go                                      │
│ - file2.go                                      │
│ - ...                                           │
│                                                 │
│ Next Task: [next_task_id] - [next_task_title]   │
│ Subtasks: N (or "TDD autonomous")               │
│ OR "No more tasks - cycle complete"             │
└─────────────────────────────────────────────────┘
```
6. ASK FOR EXPLICIT APPROVAL using AskUserQuestion tool:
Question: "Task [task_id] complete. Ready to start the next task?" Options: a) "Continue" - Proceed to next task b) "Integration Test" - User wants to test the full task integration c) "Stop Here" - Pause cycle
7. Handle user response:
If "Continue":
- Set status = "in_progress"
- Move to next task
- Set current_task_index += 1
- Set current_subtask_index = 0
- Reset to Gate 0
- Continue execution
If "Integration Test":
- Set status = "paused_for_integration_testing"
- Save state
- Output: "Cycle paused for integration testing. Test task [task_id] integration and run: /dev-cycle --resume when ready to continue."
- STOP execution
If "Stop Here":
- Set status = "paused"
- Save state
- Output: "Cycle paused after task [task_id]. Resume with: /dev-cycle --resume"
- STOP execution
**Note:** Tasks without subtasks execute both 7.1 and 7.2 in sequence.
## Step 8: Cycle Completion
0. **FINAL COMMIT CHECK (before completion):**
- IF `commit_timing == "at_end"`:
- Execute `/commit` command with message: `feat({cycle_id}): complete dev cycle for {feature_name}`
- Include all changed files from the entire cycle
- ELSE: Already committed per subtask or per task
1. **Calculate metrics:** total_duration_ms, average gate durations, review iterations, pass/fail ratio (see the metrics sketch after this list)
2. **Update state:** `status = "completed"`, `completed_at = timestamp`
3. **Generate report:** Task | Subtasks | Duration | Review Iterations | Status | Commit Status
4. **⛔ MANDATORY: Run dev-feedback-loop skill for cycle metrics**
```yaml
Skill tool:
  skill: "dev-feedback-loop"
```
Note: dev-feedback-loop manages its own TodoWrite tracking internally.
After feedback-loop completes, update state:
- Set `feedback_loop_completed = true` at cycle level in state file
⛔ HARD GATE: Cycle is NOT complete until feedback-loop executes.
Anti-Rationalization for Cycle Feedback Loop:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Cycle done, feedback is extra" | Feedback IS part of cycle completion | Execute Skill tool |
| "Will run feedback next session" | Next session = never. Run NOW. | Execute Skill tool |
| "All tasks passed, no insights" | Pass patterns need documentation too | Execute Skill tool |
- Report: "Cycle completed. Tasks X/X, Subtasks Y, Time Xh Xm, Review iterations X"
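A sketch of the metrics calculation referenced in item 1 above, assuming each gate record carries the duration and iteration counts saved in earlier steps (field names follow this document; the exact schema may vary):

```typescript
interface GateRecord {
  duration_ms: number;
  iterations?: number;
  status: "completed" | "failed" | "pending";
}

interface CycleMetrics {
  total_duration_ms: number;
  average_gate_duration_ms: number;
  total_review_iterations: number;
  pass_fail_ratio: number; // completed gates / (completed + failed)
}

function calculateCycleMetrics(gates: GateRecord[]): CycleMetrics {
  const total = gates.reduce((sum, g) => sum + g.duration_ms, 0);
  const completed = gates.filter((g) => g.status === "completed").length;
  const failed = gates.filter((g) => g.status === "failed").length;
  return {
    total_duration_ms: total,
    average_gate_duration_ms: gates.length ? total / gates.length : 0,
    total_review_iterations: gates.reduce((sum, g) => sum + (g.iterations ?? 0), 0),
    pass_fail_ratio: completed + failed ? completed / (completed + failed) : 1,
  };
}
```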
Quick Commands
```bash
# Full PM workflow then dev execution
/pre-dev-full my-feature
/dev-cycle docs/pre-dev/my-feature/

# Simple PM workflow then dev execution
/pre-dev-feature my-feature
/dev-cycle docs/pre-dev/my-feature/tasks.md

# Manual task file
/dev-cycle docs/tasks/sprint-001.md

# Resume interrupted cycle
/dev-cycle --resume
```
Error Recovery
| Type | Condition | Action |
|---|---|---|
| Recoverable | Network timeout | Retry with exponential backoff (see sketch below) |
| Recoverable | Agent failure | Retry once, then pause for user |
| Recoverable | Test flakiness | Re-run tests up to 2 times |
| Non-Recoverable | Missing required files | Stop and report |
| Non-Recoverable | Invalid state file | Must restart (cannot resume) |
| Non-Recoverable | Max review iterations | Pause for user |
On any error: Update state → Set status (failed/paused) → Save immediately → Report (what failed, why, how to recover, resume command)
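For the recoverable rows above, a minimal retry sketch with exponential backoff; the attempt count and delays are illustrative defaults, not values mandated by this skill:

```typescript
// Retry an async operation with exponential backoff: 1s, 2s, 4s, ...
// Non-recoverable errors should be thrown immediately by the caller instead.
async function withBackoff<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // exhausted retries → pause cycle and report to user
}
```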
Execution Report
Base metrics per shared-patterns/output-execution-report.md.
| Metric | Value |
|---|---|
| Duration | Xh Xm Ys |
| Tasks Processed | N/M |
| Current Gate | Gate X - [name] |
| Review Iterations | N |
| Result | PASS/FAIL/IN_PROGRESS |
Gate Timings
| Gate | Duration | Status |
|---|---|---|
| Implementation | Xm Ys | in_progress |
| DevOps | - | pending |
| SRE | - | pending |
| Testing | - | pending |
| Review | - | pending |
| Validation | - | pending |
State File Location
docs/dev-cycle/current-cycle.json (feature) or docs/dev-refactor/current-cycle.json (refactor)