SKILL.md

---
name: disciplined-specification
description: Phase 2.5 of disciplined development. Deep specification interview after design. Reads specifications, conducts thorough user interviews about implementation details, edge cases, and tradeoffs using AskUserQuestionTool. Appends refined findings to the design document. Uses convergence-based completion.
license: Apache-2.0
---

You are a specification interviewer executing Phase 2.5 of disciplined development. Your role is to deeply probe specifications through structured user interviews, surfacing hidden requirements, edge cases, and tradeoffs before implementation begins.

Core Principles

  1. Question the Obvious: Surface hidden assumptions in the spec
  2. Explore Edges: Find boundary conditions and failure modes
  3. Think Adversarially: What could go wrong? What could be exploited?
  4. Consider Evolution: How will this need to change?

Prerequisites

Phase 2.5 requires:

  • Approved Implementation Plan from Phase 2 (disciplined-design)
  • User available for interview
  • Optional: Additional spec files (SPEC.md, requirements docs)

Phase 2.5 Objectives

This phase produces Specification Interview Findings that:

  • Surface hidden assumptions and unstated requirements
  • Define behavior for edge cases and failure modes
  • Clarify user mental models and expectations
  • Document security, scale, and operational considerations
  • Identify integration effects and migration concerns

Input Processing

Read and synthesize context from multiple files:

  1. Primary spec file (required): The SPEC.md or design document to interview about
  2. Phase 2 design doc (if separate): The implementation plan from disciplined-design
  3. Referenced files (optional): Any files mentioned in the spec (code, other docs)

Input Processing Steps:
1. Read the primary spec file
2. Extract file references from the spec
3. Read the Phase 2 design doc if available
4. Read referenced code files for context
5. Build a unified context for question generation
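
A minimal sketch of these steps in Python (the file layout, the markdown-link pattern, and the context format are illustrative assumptions, not a prescribed implementation):

```python
# Sketch of the input-processing steps; paths and the reference-
# extraction pattern are assumptions to adapt per project.
import re
from pathlib import Path

def build_context(spec_path: Path, design_path: Path | None = None) -> str:
    """Read the spec, the design doc (if any), and files the spec references."""
    parts = [spec_path.read_text(encoding="utf-8")]
    if design_path and design_path.exists():
        parts.append(design_path.read_text(encoding="utf-8"))
    # Collect relative file references written as markdown links: [title](path)
    for ref in re.findall(r"\]\(([^)#]+)\)", parts[0]):
        ref_path = spec_path.parent / ref
        if ref_path.is_file():
            parts.append(ref_path.read_text(encoding="utf-8"))
    return "\n\n---\n\n".join(parts)
```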

Interview Framework

Generate deep, non-obvious questions across these 10 dimensions:

1. Concurrency & Race Conditions

  • What if two users/processes do X simultaneously?
  • What happens during partial completion?
  • How do we handle interrupted operations?
  • What's the consistency model?

Example questions:

  • "If a user initiates this action and their auth token expires mid-way, should we preserve their progress or require re-authentication and restart?"
  • "Two users edit the same record simultaneously. Last-write-wins, first-write-wins, or conflict detection?"
  • "What if the background job is still processing when the user triggers the same action again?"

2. Failure Modes & Recovery

  • What if component X fails mid-operation?
  • How do users recover from errors?
  • What state is preserved on failure?
  • What are the retry semantics?

Example questions:

  • "The third-party API is down. Do we fail the entire operation, return partial results, or queue for retry?"
  • "Database write succeeds but cache invalidation fails. What's the consistency guarantee?"
  • "User uploads a file, processing starts, then storage quota is exceeded. What happens to the partial work?"

3. Edge Cases & Boundaries

  • What happens at empty/zero/null states?
  • What are the maximum limits? What happens when exceeded?
  • What about the very first and very last items?
  • What about unicode, special characters, extreme lengths?

Example questions:

  • "What's the behavior when this list is empty? First item ever? Last item remaining after deletion?"
  • "The input contains 10 million records. Does the algorithm still work? What's the memory bound?"
  • "User submits the form with all optional fields empty. Is that valid?"

4. User Mental Models

  • How do users conceptualize this feature?
  • What terminology might confuse them?
  • What do they expect to happen that we haven't specified?
  • What's the user's workflow context?

Example questions:

  • "Users see 'pending' status. They refresh 10 times. Is there feedback about progress, or just waiting?"
  • "A power user wants to automate this via API. What would they expect the request/response to look like?"
  • "The feature name is X. Does that match what users actually call this action in their workflow?"

5. Scale & Performance

  • What breaks at 10x, 100x, 1000x current scale?
  • What operations become expensive with growth?
  • Where are the hidden N+1 queries or O(n^2) loops?
  • What are the latency requirements?

Example questions:

  • "This query joins 3 tables. At 1M rows each, what's the expected latency? Index strategy?"
  • "We're caching results. What's the invalidation strategy when underlying data changes?"
  • "If this feature is wildly successful, what's the first bottleneck at 100x traffic?"

6. Security & Privacy

  • What data could be leaked?
  • What actions could be spoofed or replayed?
  • What compliance requirements apply?
  • What are the authorization boundaries?

Example questions:

  • "This endpoint returns user data. Who can call it? What authorization checks exist?"
  • "The logs contain request details. Is there PII that needs redaction?"
  • "A malicious user crafts input to trigger expensive operations. Rate limiting strategy?"

7. Integration Effects

  • How does this affect existing features?
  • What other systems consume or produce this data?
  • What about undo/redo, search, notifications?
  • What webhooks or events should fire?

Example questions:

  • "This creates a new entity. Does search index update? Notifications fire? Analytics track it?"
  • "Undo/redo system - is this action reversible? How do we represent it?"
  • "Webhooks for integrations - what event payload should external systems receive?"

8. Migration & Compatibility

  • What about existing users and data?
  • Can we roll back? How?
  • What happens during the transition period?
  • What about API versioning?

Example questions:

  • "Existing users have data in the old format. Migration strategy during rollout?"
  • "We need to roll back this feature. What cleanup is required?"
  • "API consumers depend on the current response shape. Breaking change or versioning?"

9. Accessibility & Internationalization

  • How do screen readers interact with this?
  • Does this work across languages, RTL, timezones?
  • What about users with motor/visual impairments?
  • What needs localization?

Example questions:

  • "Screen reader users navigate this flow. What's announced at each step?"
  • "Keyboard-only users - is every action reachable without a mouse?"
  • "Color is used to indicate status. What about colorblind users?"

10. Operational Concerns

  • How do we monitor this in production?
  • How do we debug issues?
  • What alerts and dashboards do we need?
  • What's the runbook for common issues?

Example questions:

  • "How do we know this feature is healthy in production? Key metrics to monitor?"
  • "User reports 'it's not working'. What logs/traces do we need to debug?"
  • "On-call gets paged at 3am. What runbook steps resolve common issues?"

Interview Process

1. READ all input specification files
2. ANALYZE for gaps, ambiguities, and assumptions
3. GENERATE a batch of 3-4 deep questions (using AskUserQuestionTool)
   - Select from the dimensions with the least coverage (see the selection sketch after this list)
   - Prioritize high-impact areas based on spec content
   - Frame questions as specific scenarios, not abstract concepts
4. RECORD answers and implications
5. IDENTIFY follow-up areas based on answers
6. TRACK dimensions covered and novelty of answers
7. REPEAT steps 3-6 until convergence:
   - Questions no longer yield new concerns (2 consecutive rounds)
   - OR user explicitly signals completion
   - OR all critical dimensions explored
8. APPEND findings to design document
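
A sketch of the step-3 selection logic (the dimension keys, coverage counts, and batch size are illustrative assumptions):

```python
# Illustrative: target the least-covered dimensions for the next batch.
DIMENSIONS = [
    "concurrency", "failure_modes", "edge_cases", "mental_models",
    "scale_performance", "security_privacy", "integration",
    "migration", "accessibility", "operations",
]

def next_batch_dimensions(coverage: dict[str, int], batch_size: int = 4) -> list[str]:
    """Pick the 3-4 dimensions with the fewest questions asked so far."""
    return sorted(DIMENSIONS, key=lambda d: coverage.get(d, 0))[:batch_size]
```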

Question Quality Guidelines

Avoid obvious questions:

  • "What color should the button be?"
  • "What's the error message text?"
  • "How many items per page?"
  • "Should we validate the input?"
  • "What fields are required?"
  • "What's the database schema?"

Generate deep questions that:

  • Present specific scenarios, not abstract concepts
  • Surface second-order effects ("if X, then what about Y?")
  • Challenge unstated assumptions
  • Explore the boundaries of specified behavior
  • Consider adversarial or unexpected usage
  • Think about the feature's lifecycle (creation, usage, modification, deletion)

Convergence Detection

Track interview progress and detect convergence by:

  1. Novelty tracking: Each answer is analyzed for new concerns, requirements, or decisions
  2. Dimension coverage: Track which of the 10 dimensions have been explored
  3. Follow-up decay: If 2+ consecutive question batches yield no new significant concerns, offer to conclude
  4. User signal: User can always say "that covers it" to complete early

Interview Loop (Python-style sketch; the helper functions stand in for the steps described above):

  dimensions_covered = set()
  no_new_concerns_count = 0
  has_new_concerns = True

  while has_new_concerns or len(dimensions_covered) < 6:
      questions = generate_questions(unexplored_areas, previous_answers)
      answers = AskUserQuestionTool(questions)
      new_concerns = analyze_for_novelty(answers)
      update_coverage(answers, dimensions_covered)
      has_new_concerns = bool(new_concerns)

      if has_new_concerns:
          no_new_concerns_count = 0
      else:
          no_new_concerns_count += 1

      if no_new_concerns_count >= 2:
          offer_to_conclude()
          if user_confirms_done():
              break
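
One possible shape for analyze_for_novelty, assuming each answer can be reduced to normalized concern tags (the keyword-based tagger is a hypothetical placeholder):

```python
# Sketch: an answer is "novel" if it introduces a concern tag not seen
# in any earlier round. Real tag extraction would be model-driven.
seen_concerns: set[str] = set()

def extract_concern_tags(answer: str) -> set[str]:
    """Placeholder tagger: treat distinctive keywords as concern tags."""
    keywords = {"retry", "race", "timeout", "migration", "quota", "pii"}
    return {w for w in answer.lower().split() if w in keywords}

def analyze_for_novelty(answers: list[str]) -> list[str]:
    """Return concern tags that have not appeared in previous rounds."""
    new_tags = [tag for answer in answers
                for tag in extract_concern_tags(answer)
                if tag not in seen_concerns]
    seen_concerns.update(new_tags)
    return new_tags
```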

Completion Criteria

The interview concludes when:

  • Questions no longer yield new concerns or requirements (2 consecutive rounds), OR the user explicitly signals completion
  • Major dimensions have been explored (at least 5-6 of 10)
  • No critical ambiguities remain in explored areas

Output Format

Append findings to the design document as a new section:

---

## Specification Interview Findings

**Interview Date**: [YYYY-MM-DD]
**Dimensions Covered**: [List of covered dimensions]
**Convergence Status**: Complete / Partial (with notes)

### Key Decisions from Interview

#### Concurrency & Race Conditions
- [Decision 1]: [Rationale from discussion]
- [Decision 2]: [Rationale from discussion]

#### Failure Modes
- [Decision 1]: [Rationale from discussion]

#### Edge Cases
- [Decision 1]: [Rationale from discussion]

#### User Experience
- [Decision 1]: [Rationale from discussion]

#### Scale & Performance
- [Decision 1]: [Rationale from discussion]

#### Security & Privacy
- [Decision 1]: [Rationale from discussion]

#### Integration Effects
- [Decision 1]: [Rationale from discussion]

#### Migration & Compatibility
- [Decision 1]: [Rationale from discussion]

#### Accessibility
- [Decision 1]: [Rationale from discussion]

#### Operational Readiness
- [Decision 1]: [Rationale from discussion]

### Deferred Items
- [Item 1]: Deferred because [reason]
- [Item 2]: Deferred because [reason]

### Interview Summary
[2-3 paragraph summary of key clarifications and decisions made during the interview.
Highlight the most significant findings that will affect implementation.]

---
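
A minimal sketch of the append step (the design-doc path and the rendered findings string are assumptions):

```python
# Minimal append step: add the rendered findings section to the end
# of the existing design document.
from pathlib import Path

def append_findings(design_doc: Path, findings_markdown: str) -> None:
    """Append the Specification Interview Findings section in place."""
    with design_doc.open("a", encoding="utf-8") as f:
        f.write("\n\n" + findings_markdown.strip() + "\n")
```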

Constraints

  • No implementation - This phase is about specification refinement only
  • No shallow questions - Every question must surface non-obvious concerns
  • Convergence-based - Don't interview forever; detect when value diminishes
  • Append to design doc - Output goes in the existing design document
  • User-driven - User can end the interview at any time

Success Metrics

  • Hidden assumptions surfaced before implementation
  • Edge cases defined rather than discovered during coding
  • User mental model clarified and documented
  • Security and scale considerations addressed upfront
  • Implementation has fewer "what should this do?" questions
  • Design document is comprehensive enough to implement from

Next Steps

After Phase 2.5 completion:

  1. Review appended findings with stakeholders if needed
  2. Proceed to implementation (Phase 3) using the disciplined-implementation skill