| name | TDD Process |
| description | Strict test-driven development state machine with red-green-refactor cycles. Enforces test-first development, meaningful failures, minimum implementations, and full verification. Activates when user requests: 'use a TDD approach', 'start TDD', 'test-drive this'. |
| version | 1.0.0 |
In Plan Mode: Plans should be test specifications, not implementation designs. Include key insights, architectural constraints, and suggestions, but never the full implementation of production code.
🚨 CRITICAL: TDD STATE MACHINE GOVERNANCE 🚨
EVERY SINGLE MESSAGE MUST START WITH YOUR CURRENT TDD STATE
Format:
🔴 TDD: RED
🟢 TDD: GREEN
🔵 TDD: REFACTOR
⚪ TDD: PLANNING
🟡 TDD: VERIFY
⚠️ TDD: BLOCKED
NOT JUST THE FIRST MESSAGE. EVERY. SINGLE. MESSAGE.
When you read a file → prefix with TDD state
When you run tests → prefix with TDD state
When you explain results → prefix with TDD state
When you ask a question → prefix with TDD state
Example:
⚪ TDD: PLANNING
Writing test for negative price validation...
⚪ TDD: PLANNING
Running npm test to see it fail...
⚪ TDD: PLANNING
Test output shows: Expected CannotHaveNegativePrice error but received -50
Test fails correctly. Transitioning to RED.
🔴 TDD: RED
Test IS failing. Addressing what the error message demands...
🚨 FAILURE TO ANNOUNCE TDD STATE = SEVERE VIOLATION 🚨
- CANNOT skip states or assume completion without evidence
- MUST announce state on EVERY message
- MUST validate post-conditions before transitioning
Before each response: Verify your claimed state matches your tool call evidence.
If mismatch: 🔥 STATE VIOLATION DETECTED → announce correct state → recover.
State announcement: Every message starts with 🔴 TDD: RED (or current state).
Forgot prefix? Announce violation immediately, then continue.
<state name="PLANNING">
<prefix>⚪ TDD: PLANNING</prefix>
<purpose>Analyze the requirement and write a test that fails for the right reason.</purpose>
<pre_conditions>
✅ User has provided a task/requirement/bug report
✅ No other TDD cycle in progress
</pre_conditions>
<actions>
1. Analyze requirement/bug
2. Ask clarifying questions if needed
3. Determine what behavior needs testing
4. Identify edge cases
5. Write test for specific behavior
6. Run test (use Bash tool to execute test command)
7. VERIFY test fails correctly
8. Show exact failure message to user (copy/paste verbatim output)
9. Justify why failure message proves test is correct
10. If the failure is "method doesn't exist", implement an empty/dummy method and re-run from step 6
11. Repeat until you get a "meaningful" failure (see the sketch after this list)
12. Check whether the test failure states a precise reason for failing. If it does not, ask the user whether to improve the error message first.
13. Transition to RED
</actions>
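A minimal sketch of steps 5-9, assuming a Vitest-style test runner and a hypothetical `Price` module that does not exist yet:

```typescript
import { describe, expect, it } from "vitest";
// Hypothetical module under test - it does not exist yet, so the first
// run fails on the import itself (a setup failure, not yet meaningful).
import { Price } from "./price";

describe("Price", () => {
  it("rejects negative prices", () => {
    // The meaningful failure being driven toward:
    // expected CannotHaveNegativePrice error, but -50 was accepted.
    expect(() => Price.of(-50)).toThrowError("CannotHaveNegativePrice");
  });
});
```

Per step 10, adding a dummy `Price.of` stub turns the import/setup failure into the meaningful assertion failure above.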
<post_conditions>
✅ Test written and executed
✅ Test FAILED correctly (red bar achieved)
✅ Failure message shown to user verbatim
✅ Failure reason justified (proves test is correct)
✅ Failure is "meaningful" (not setup/syntax error)
</post_conditions>
<validation_before_transition>
BEFORE transitioning to RED, announce:
"Pre-transition validation:
✅ Test written: [yes]
✅ Test executed: [yes]
✅ Test failed correctly: [yes]
✅ Failure message shown: [yes - output above]
✅ Meaningful failure: [yes - justification]
Transitioning to RED - test is now failing for the right reason."
</validation_before_transition>
<transitions>
- PLANNING → RED (when test fails correctly - red milestone achieved)
- PLANNING → BLOCKED (when cannot write a valid test)
</transitions>
</state>
<state name="RED">
<prefix>🔴 TDD: RED</prefix>
<purpose>Test IS failing for the right reason. Implement ONLY what the error message demands.</purpose>
🚨 CRITICAL: You are in RED state - test IS CURRENTLY FAILING. You MUST implement code and see test PASS, code COMPILE, code LINT before transitioning to GREEN.
DO NOT transition to GREEN until you have:
1. Implemented ONLY what the error message demands
2. Executed the test with Bash tool
3. Seen the SUCCESS output (green bar)
4. Executed compile check and seen SUCCESS
5. Executed lint check and seen PASS
6. Shown all success outputs to the user
<pre_conditions>
✅ Test written and executed (from PLANNING)
✅ Test IS FAILING correctly (red bar visible)
✅ Failure message shown and justified
✅ Failure is "meaningful" (not setup/syntax error)
</pre_conditions>
<actions>
1. Read the error message - what does it literally ask for?
2. 🚨 MANDATORY SELF-CHECK - announce before implementing:
"Minimal implementation check:
- Error demands: [what the error literally says]
- Could hardcoded value work? [yes/no]
- If yes: [what hardcoded value]
- If no: [why real logic is required]"
Guidelines:
- If test asserts `x === 5` → return `5`
- If test asserts `count === 0` → return object with `count: 0`
- If test asserts type → return minimal stub of that type
- Only add logic when tests FORCE you to (multiple cases, different inputs) - see the sketch after this actions block
3. Implement ONLY what that error message demands (hardcoded if possible)
4. Do NOT anticipate future errors - address THIS error only
5. Run test (use Bash tool to execute test command)
6. VERIFY test PASSES (green bar)
7. Show exact success message to user (copy/paste verbatim output)
8. Run quick compilation check (e.g., tsc --noEmit, or project-specific compile command)
9. Run lint on changed code
10. If compile/lint fails: Fix issues and return to step 5 (re-run test)
11. Show compile/lint success output to user
12. Justify why implementation is minimum
13. ONLY AFTER completing steps 5-12: Announce post-condition validation
14. ONLY AFTER validation passes: Transition to GREEN
🚨 YOU CANNOT TRANSITION TO GREEN UNTIL TEST PASSES, CODE COMPILES, AND CODE LINTS 🚨
</actions>
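Continuing the hypothetical `Price` sketch from PLANNING, the truly minimal step-3 implementation addresses only the one error seen so far:

```typescript
// Hypothetical module. The single failing test only demands that
// Price.of(-50) throws CannotHaveNegativePrice, so the minimal
// implementation throws unconditionally. Later tests with valid
// prices will force the real guard (value < 0) into existence.
export class Price {
  static of(_value: number): Price {
    throw new Error("CannotHaveNegativePrice");
  }
}
```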
<post_conditions>
✅ Test executed
✅ Test PASSES (green bar - not red)
✅ Success message shown to user verbatim
✅ Code compiles (no compilation errors)
✅ Code lints (no linting errors)
✅ Compile/lint output shown to user
✅ Implementation addresses ONLY what the error message demanded (justified)
</post_conditions>
<validation_before_transition>
🚨 BEFORE transitioning to GREEN, verify ALL with evidence from tool history:
✅ Test PASSES (green bar) - show verbatim output
✅ Code compiles - show output
✅ Code lints - show output
✅ Implementation addresses ONLY what error demanded - justify
If ANY evidence missing: "⚠️ CANNOT TRANSITION - Missing: [what]" → stay in RED.
</validation_before_transition>
<critical_rules>
🚨 NEVER transition to GREEN without test PASS + compile SUCCESS + lint PASS
🚨 IMPLEMENT ONLY WHAT THE ERROR MESSAGE DEMANDS - no anticipating future errors
🚨 DON'T CHANGE TEST TO MATCH IMPLEMENTATION - fix the code, not the test
</critical_rules>
<transitions>
- RED → GREEN (when test PASSES, code COMPILES, code LINTS - green milestone achieved)
- RED → BLOCKED (when cannot make test pass or resolve compile/lint errors)
- RED → PLANNING (when test failure reveals requirement was misunderstood)
</transitions>
</state>
<state name="GREEN">
<prefix>🟢 TDD: GREEN</prefix>
<purpose>Test IS passing for the right reason. Assess code quality and decide next step.</purpose>
<pre_conditions>
✅ Test exists and PASSES (from RED)
✅ Test IS PASSING for the right reason (green bar visible)
✅ Code compiles (no compilation errors)
✅ Code lints (no linting errors)
✅ Pass output was shown and implementation justified as minimum
</pre_conditions>
<actions>
1. Review the implementation that made test pass
2. Check code quality against object calisthenics
3. Check for feature envy (see the sketch after this actions block)
4. Check for dependency inversion opportunities
5. Check naming conventions
6. Decide: Does code need refactoring?
7a. If YES refactoring needed → Transition to REFACTOR
7b. If NO refactoring needed → Transition to VERIFY
</actions>
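As one illustration of the feature-envy check in step 3 (all names here are hypothetical, not part of this skill):

```typescript
// BEFORE - feature envy: the formatter digs through OrderData's
// fields instead of asking the object to present itself.
interface OrderData {
  customerName: string;
  items: string[];
}

function formatOrderLabel(order: OrderData): string {
  return `${order.customerName} - ${order.items.length} items`;
}

// AFTER - the behavior moves next to the data it envied.
class Order {
  constructor(
    private readonly customerName: string,
    private readonly items: ReadonlyArray<string>,
  ) {}

  label(): string {
    return `${this.customerName} - ${this.items.length} items`;
  }
}
```

Spotting a smell like the BEFORE version is exactly what routes the cycle from GREEN to REFACTOR.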
<post_conditions>
✅ Test IS PASSING (green bar)
✅ Code quality assessed
✅ Decision made: refactor or verify
</post_conditions>
<validation_before_transition>
BEFORE transitioning to REFACTOR or VERIFY, announce:
"Post-condition validation:
✅ Test IS PASSING: [yes - green bar visible]
✅ Code quality assessed: [yes]
✅ Decision: [REFACTOR needed / NO refactoring needed, go to VERIFY]
All post-conditions satisfied. Transitioning to [REFACTOR/VERIFY]."
IF any post-condition NOT satisfied:
"β οΈ CANNOT TRANSITION - Post-condition failed: [which one]
Staying in GREEN state to address: [issue]"
</validation_before_transition>
<critical_rules>
🚨 GREEN state means test IS PASSING, code COMPILES, code LINTS - if any fail, you're back to RED
🚨 NEVER skip code quality assessment
🚨 NEVER transition if test is not passing
🚨 NEVER transition if code doesn't compile or lint
🚨 ALWAYS assess whether refactoring is needed
🚨 Go to REFACTOR if improvements needed, VERIFY if code is already clean
</critical_rules>
<transitions>
- GREEN → REFACTOR (when refactoring needed - improvements identified)
- GREEN → VERIFY (when code quality satisfactory - no refactoring needed)
- GREEN → RED (if test starts failing - regression detected, need new failing test)
</transitions>
</state>
<state name="REFACTOR">
<prefix>🔵 TDD: REFACTOR</prefix>
<purpose>Tests ARE passing. Improving code quality while maintaining green bar.</purpose>
<pre_conditions>
✅ Tests ARE PASSING (from GREEN)
✅ Code compiles (no compilation errors)
✅ Code lints (no linting errors)
✅ Refactoring needs identified
✅ Pass output was shown
</pre_conditions>
<actions>
1. Analyze code for design improvements
2. Check against project conventions
3. Check naming conventions
4. If improvements needed:
a. Explain refactoring
b. Apply refactoring
c. Run test to verify behavior preserved
d. Show test still passes
5. Repeat until no more improvements
6. Check tests for improvement opportunities - e.g. combine tests with it.each (see the sketch after this actions block), and check against project testing conventions (e.g. `/docs/conventions/testing.md`)
7. Transition to VERIFY
</actions>
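For step 6, near-duplicate tests can often be collapsed with `it.each`; a sketch assuming a Vitest runner and a hypothetical `validatePrice` helper:

```typescript
import { expect, it } from "vitest";
import { validatePrice } from "./price"; // hypothetical helper

// One parameterized test replaces three copy-pasted ones; behavior
// is unchanged, so the bar must stay green after the rewrite.
it.each([
  [-50, "CannotHaveNegativePrice"],
  [-1, "CannotHaveNegativePrice"],
  [Number.NEGATIVE_INFINITY, "CannotHaveNegativePrice"],
])("rejects price %s with %s", (price, errorName) => {
  expect(() => validatePrice(price)).toThrowError(errorName);
});
```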
<post_conditions>
✅ Code reviewed for quality
✅ Object calisthenics applied
✅ No feature envy
✅ Dependencies inverted
✅ Names are intention-revealing
✅ Tests still pass after each refactor
✅ Test output shown after each refactor
</post_conditions>
<validation_before_transition>
BEFORE transitioning to VERIFY, announce:
"Post-condition validation:
✅ Object calisthenics: [applied/verified]
✅ Feature envy: [none detected]
✅ Dependencies: [properly inverted]
✅ Naming: [intention-revealing]
✅ Tests pass: [yes - output shown]
All post-conditions satisfied. Transitioning to VERIFY."
</validation_before_transition>
<critical_rules>
🚨 NEVER refactor without running tests after
🚨 NEVER use generic names (data, utils, helpers)
🚨 ALWAYS verify tests pass after refactor
</critical_rules>
<transitions>
- REFACTOR → VERIFY (when code quality satisfactory)
- REFACTOR → RED (if refactor broke test - write new test for edge case)
- REFACTOR → BLOCKED (if cannot refactor due to constraints)
</transitions>
</state>
<state name="VERIFY">
<prefix>🟡 TDD: VERIFY</prefix>
<purpose>Tests ARE passing. Run full test suite + lint + build before claiming complete.</purpose>
<pre_conditions>
✅ Tests ARE PASSING (from GREEN or REFACTOR)
✅ Code compiles (no compilation errors)
✅ Code lints (no linting errors)
✅ Either: Refactoring complete OR no refactoring needed
</pre_conditions>
<actions>
1. Run full test suite, not just the current test (a typical command sequence is sketched after this list)
2. Capture and show output
3. Run lint
4. Capture and show output
5. Run build
6. Capture and show output
7. If ALL pass → Transition to COMPLETE
8. If ANY fail → Transition to BLOCKED or RED
</actions>
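Assuming a typical npm-based project (the script names are whatever the project actually defines), the three checks usually map to commands like:

```
npm test         # steps 1-2: full test suite
npm run lint     # steps 3-4: lint
npm run build    # steps 5-6: build
```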
<post_conditions>
✅ Full test suite executed
✅ All tests PASSED
✅ Test output shown
✅ Lint executed
✅ Lint PASSED
✅ Lint output shown
✅ Build executed
✅ Build SUCCEEDED
✅ Build output shown
</post_conditions>
<validation_before_completion>
BEFORE claiming COMPLETE, announce:
"Final validation:
✅ Full test suite: [X/X tests passed - output shown]
✅ Lint: [passed - output shown]
✅ Build: [succeeded - output shown]
All validation passed. TDD cycle COMPLETE.
Session Summary:
- Tests written: [count]
- Refactorings: [count]
- Violations: [count]
- Duration: [time]"
IF any validation FAILED:
"β οΈ VERIFICATION FAILED
Failed check: [which one]
Output: [failure message]
Routing to: [RED/BLOCKED depending on issue]"
</validation_before_completion>
<critical_rules>
🚨 NEVER claim complete without full test suite
🚨 NEVER claim complete without lint passing
🚨 NEVER claim complete without build passing
🚨 ALWAYS show output of each verification
🚨 NEVER skip verification steps
</critical_rules>
<transitions>
- VERIFY → COMPLETE (when all checks pass)
- VERIFY → RED (when tests fail - regression detected)
- VERIFY → REFACTOR (when lint fails - code quality issue)
- VERIFY → BLOCKED (when build fails - structural issue)
</transitions>
</state>
<state name="BLOCKED">
<prefix>⚠️ TDD: BLOCKED</prefix>
<purpose>Handle situations where progress cannot continue</purpose>
<pre_conditions>
✅ Encountered issue preventing progress
✅ Issue is not user error or misunderstanding
</pre_conditions>
<actions>
1. Clearly explain blocking issue
2. Explain which state you were in
3. Explain what you were trying to do
4. Explain why you cannot proceed
5. Suggest possible resolutions
6. STOP and wait for user guidance
</actions>
<post_conditions>
✅ Blocker documented
✅ Context preserved
✅ Suggestions provided
✅ Waiting for user
</post_conditions>
<critical_rules>
🚨 NEVER improvise workarounds
🚨 NEVER skip steps to "unblock" yourself
🚨 ALWAYS stop and wait for user
</critical_rules>
<transitions>
- BLOCKED → [any state] (based on user guidance)
</transitions>
</state>
<state name="VIOLATION_DETECTED">
<prefix>🔥 TDD: VIOLATION_DETECTED</prefix>
<purpose>Handle state machine violations</purpose>
<trigger>
Self-detected violations:
- Forgot state announcement
- Skipped state
- Failed to validate post-conditions
- Claimed phase complete without evidence
- Skipped test execution
- Changed assertion when test failed
- Changed test assertion to match implementation (instead of fixing implementation)
- Implemented full solution when hardcoded value would satisfy error
- Skipped mandatory self-check before implementing
</trigger>
<actions>
1. IMMEDIATELY announce: "🔥 STATE VIOLATION DETECTED"
2. Explain which rule/state was violated
3. Explain what you did wrong
4. Announce correct current state
5. Ask user permission to recover
6. If approved, return to correct state
</actions>
<example>
"π₯ STATE VIOLATION DETECTED
Violation: Forgot to announce state on previous message
Current actual state: RED
Recovering to correct state...
🔴 TDD: RED
[continue from here]"
</example>
</state>
Example techniques:
- Return wrong value (null, 0, "") to get meaningful assertion failure
- Hardcode expected value to pass, then add more tests to force real logic
- Never jump from "not implemented" to full solution
Concrete example:
- Error: "Not implemented"
- Test expects: componentCount: 0, linkCount: 0
- ❌ WRONG: Implement real resume() with graph parsing logic
- ✅ RIGHT: Return hardcoded stub `{ componentCount: 0, linkCount: 0 }`
- Then: Add more tests to force real implementation
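A sketch of that progression, with a hypothetical `resume` signature:

```typescript
interface GraphStats {
  componentCount: number;
  linkCount: number;
}

// Step 1 - hardcoded stub that satisfies the current failing test.
export function resume(_graph: string): GraphStats {
  return { componentCount: 0, linkCount: 0 };
}

// Step 2 - a new test such as
//   expect(resume("a-b").linkCount).toBe(1)
// now fails, which is what finally forces real parsing logic.
```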