Claude Code Plugins

Community-maintained marketplace


Interviews users to build comprehensive project specifications. Use when starting a new project, feature, or when user needs help defining requirements through guided discovery.

Install Skill

1. Download skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions and verify its behavior before using it.

SKILL.md

name: spec-interview
description: Interviews users to build comprehensive project specifications. Use when starting a new project, feature, or when user needs help defining requirements through guided discovery.
Build complete, production-ready specifications through deep, systematic interviewing. Read existing SPEC.md if present, then probe the user about every aspect they haven't fully thought through: architecture decisions, edge cases, error states, UX flows, security concerns, performance tradeoffs, integration points, and failure modes.

The goal is to surface hidden assumptions and force decisions BEFORE implementation begins.

Never ask questions the user has already answered or could trivially answer. Dig into:

- What happens when X fails?
- How does this interact with Y?
- What's the migration path from current state?
- Who's responsible when Z goes wrong?
- What does success look like in 6 months?

Start broad, then drill into areas of uncertainty. When the user gives vague answers, probe deeper. When they're confident, move on. Detect hesitation and explore it.

Don't let users have everything. Force explicit tradeoffs:

- "You mentioned both X and Y. These conflict because... which matters more?"
- "This approach optimizes for A but sacrifices B. Is that acceptable?"

Continue interviewing until EVERY section of the spec template has concrete answers. Vague sections = more questions. Only stop when the spec is implementation-ready.

1. Check if SPEC.md exists and read it
2. Identify gaps, ambiguities, and untested assumptions
3. Begin interviewing using AskUserQuestionTool
4. Cover ALL domains systematically (see question_domains)
5. Write the completed spec to SPEC.md

**Read existing spec if present:**

```
Read SPEC.md (or specified file path)
```
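The loop described above can be sketched in code. This is a hypothetical illustration: the names `SpecSection`, `interview`, and `ask_round` are invented here, and the real skill drives Claude's AskUserQuestionTool rather than running Python.

```python
# Hypothetical sketch of the interview loop: keep asking until every
# section of the spec has concrete (non-"TBD") answers.
from dataclasses import dataclass, field


@dataclass
class SpecSection:
    name: str
    answers: list = field(default_factory=list)

    def is_concrete(self) -> bool:
        # A section is implementation-ready once it has at least one
        # committed answer and no placeholder like "TBD".
        return bool(self.answers) and not any("TBD" in a for a in self.answers)


def interview(sections, ask_round):
    """Loop over sections until all are concrete.

    ask_round stands in for an AskUserQuestionTool call: it takes a
    section name and returns the user's answers for that round.
    """
    while not all(s.is_concrete() for s in sections):
        for section in sections:
            if not section.is_concrete():
                section.answers = ask_round(section.name)
    return {s.name: s.answers for s in sections}
```

The key property mirrored from the instructions is the stopping condition: vague or placeholder answers keep a section in the loop, so the interview only ends when the whole spec is concrete.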

If it exists: analyze what's defined versus what's missing or vague. If not: start fresh, but ask about existing context (related systems, constraints, prior art).

Before diving deep, establish scope with 2-3 broad questions:

- What problem are you solving and for whom?
- What's the minimal viable version vs. the full vision?
- What constraints exist (time, tech stack, team, budget)?

**Interview through ALL domains below.** Use AskUserQuestionTool with 2-4 targeted questions per round. Mix domains to keep the conversation dynamic.

CRITICAL: Each question must:

  • Be specific to THIS project (not generic)
  • Surface a decision or assumption
  • Have meaningful, distinct options
  • Force the user to commit to something
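A hedged example of a question meeting these criteria, written as a plain dict. The shape is illustrative only; the actual AskUserQuestionTool schema may differ, and the scenario (a payment provider outage) is invented.

```python
# Hypothetical question payload: project-specific, surfaces a decision,
# and offers meaningfully distinct options the user must choose between.
question = {
    "question": (
        "When the payment provider is down, should checkout "
        "queue orders or fail fast?"
    ),
    "options": [
        {
            "label": "Queue and retry",
            "description": (
                "Orders are accepted and fulfilled once the provider "
                "recovers; risks stale pricing and hidden state."
            ),
        },
        {
            "label": "Fail fast",
            "description": (
                "User sees an error immediately; no hidden state, "
                "but sales are lost during the outage."
            ),
        },
    ],
}
```

Contrast this with a generic question like "How should errors be handled?", which surfaces no decision and gives the user nothing concrete to commit to.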
After each answer round, identify:

- New questions raised by the answer
- Contradictions with earlier answers
- Areas where the user seemed uncertain

Probe these immediately before moving on.

When all domains are covered and no ambiguities remain:

1. Use the template in `templates/spec-template.md`
2. Fill every section with concrete decisions
3. Mark any remaining open questions explicitly
4. Write to SPEC.md (or user-specified path)
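The template file itself is not shown on this page. A plausible skeleton, inferred from the question domains this skill covers, might look like the following; the section names are assumptions, not the actual contents of `templates/spec-template.md`.

```markdown
# Project Specification

## Problem & Users
## Scope (In / Out / MVP vs. Later)
## Architecture & Data Model
## Integrations & Dependencies
## User Experience & Edge Cases
## Failure Modes & Recovery
## Security & Compliance
## Performance & Scale
## Operations (Monitoring, Deployment, Rollback)
## Testing & Quality Gates
## Timeline & Rollout
## Open Questions
```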
**Surface hidden assumptions about the problem:**

- What's the actual pain point? (not the solution they've imagined)
- Who are the real users? (roles, technical level, frequency of use)
- What do users do TODAY without this? (current workarounds)
- What would make users NOT use this? (adoption blockers)
- How will you know if this succeeded? (measurable outcomes)

**Force explicit scope decisions:**

- What's explicitly OUT of scope? (as important as what's in)
- What's the MVP vs. v2 vs. "nice to have someday"?
- What adjacent problems are you intentionally NOT solving?
- What happens if scope must be cut by 50%? What survives?

**Probe technical decisions and their implications:**

- What's the data model? What are the core entities and relationships?
- Where does state live? (client, server, database, cache)
- What's the source of truth for X? (when there's duplication)
- How does this scale? (10x users, 100x data, distributed team)
- What's the deployment model? (self-hosted, SaaS, hybrid)
- What's the upgrade/migration path from v1 to v2?
- What technical debt are you knowingly taking on?

**Map the system boundaries:**

- What external systems does this touch? (APIs, databases, services)
- What happens when dependency X is down?
- Who owns the integration contracts? How do they change?
- What data flows in and out? What's the format/protocol?
- Are there rate limits, quotas, or cost implications?

**Get specific about user experience:**

- Walk through the primary user journey step-by-step
- What's the first thing a new user sees/does?
- How does the user recover from mistakes?
- What feedback does the user get at each step?
- What's the mobile/responsive story?
- What accessibility requirements exist?
- How does this look with 0 items? 1 item? 1000 items?

**Surface failure modes:**

- What happens when the network fails mid-operation?
- What if the user does X twice rapidly?
- What if data is malformed or missing fields?
- What's the worst thing that could happen? How do we prevent it?
- What does partial failure look like? (3 of 5 items succeed)
- How do users know something went wrong?
- What's the retry/recovery mechanism?

**Force security decisions:**

- What data is sensitive? What's the classification?
- Who can see/edit/delete what? (permission model)
- How is authentication handled? (existing system? new?)
- What audit trail is required?
- What compliance requirements exist? (GDPR, SOC2, HIPAA)
- What happens to data when a user/account is deleted?

**Establish non-functional requirements:**

- What response times are acceptable? (p50, p95, p99)
- What's the availability target? (99%, 99.9%, 99.99%)
- What's the expected load? (requests/sec, concurrent users)
- What happens under load? (graceful degradation vs. hard failure)
- What's the data retention policy?
- What's the backup/recovery strategy?

**Think about day 2:**

- How will this be monitored? What alerts exist?
- How do you debug when something goes wrong?
- What does deployment look like? (CI/CD, manual, hybrid)
- Who's on-call? What's the escalation path?
- How is configuration managed? (env vars, config files, admin UI)
- What's the rollback plan?

**Define quality gates:**

- What must be tested? (unit, integration, e2e)
- What's the test data strategy?
- How do you test integrations with external systems?
- What are the acceptance criteria for "done"?
- Who approves releases?

**Reality-check the plan:**

- What's driving the timeline? (hard deadline, soft goal, ASAP)
- What's the phased rollout plan?
- What's the feature flag strategy?
- What can be parallelized? What's serial?
- What are the riskiest parts that need prototyping?

When the user says "it depends" or "we'll figure it out later":

- "What specifically does it depend on?"
- "What would need to be true for option A vs. option B?"
- "If you had to decide RIGHT NOW, which way would you lean?"

When the user says something confidently:

- "What would change if [assumption] turned out to be wrong?"
- "How would you verify that [assumption] is true before building?"
- "Have you seen this work elsewhere? What was different?"

When everything seems important:

- "If you could only ship ONE of these, which one?"
- "What would you cut if the timeline was halved?"
- "Which of these would you be embarrassed NOT to have?"

When two answers seem incompatible:

- "Earlier you said X, but this suggests Y. How do these reconcile?"
- "This creates a tradeoff between A and B. Where do you land?"

Interview is complete when:

- [ ] All domains have been covered with project-specific questions
- [ ] User has made explicit decisions on all tradeoffs
- [ ] No "TBD" or "we'll figure it out" remains in critical areas
- [ ] Edge cases and failure modes have concrete handling strategies
- [ ] The spec could be handed to a developer who would know what to build
- [ ] User confirms "this is complete enough to start building"

See `templates/spec-template.md` for the output structure.