Claude Code Plugins

Community-maintained marketplace

feedback-driven-development

@chriscarterux/chris-claude-stack

Install Skill

1. Download skill

2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions to verify it before use.

SKILL.md

name: feedback-driven-development
description: This skill should be used when collecting, analyzing, and acting on user feedback to drive product decisions. It covers user interview techniques, feedback categorization, prioritization frameworks, and a systematic workflow connecting feedback-synthesizer to sprint-prioritizer to implementation for data-driven product development.

Feedback-Driven Development

Overview

Build what users actually need by systematically collecting, analyzing, and acting on feedback. Transform user insights into product improvements through structured processes.

Core principle: Users tell you what's broken. Your job is to find the pattern and fix the root cause.

When to Use

Use when:

  • Planning product roadmap
  • Deciding what to build next
  • Users reporting issues or requests
  • Post-launch evaluation
  • Validating assumptions
  • Prioritizing features

The Feedback Loop

Collect → Synthesize → Prioritize → Build → Measure → Repeat

1. Collect Feedback (Multi-Channel)

Sources:

  • In-app feedback widgets
  • App store reviews
  • Support tickets
  • User interviews
  • Social media mentions
  • Analytics (behavior is feedback)
  • Surveys (targeted questions)

Collection agents:

  • feedback-synthesizer (analyze collected feedback)
  • support-responder (support ticket patterns)
  • analytics-reporter (behavioral data)

2. Synthesize Patterns

Use feedback-synthesizer agent:

@feedback-synthesizer analyze feedback from:
- App store reviews (last 30 days)
- Support tickets (last 30 days)
- In-app feedback submissions

Identify:
- Top 5 pain points
- Feature requests by frequency
- Bug reports by severity
- Sentiment trends

Look for:

  • Repeated complaints (patterns)
  • Surprising requests (blind spots)
  • Emotional language (strong feelings)
  • Churned users' reasons

3. Categorize Feedback

Categories:

  • Bugs: Something broken
  • Feature requests: New functionality
  • UX issues: Confusing or frustrating
  • Performance: Slow or laggy
  • Content: Missing info or unclear
  • Noise: Not actionable

Priority levels:

  • P0 - Critical: Blocking users, losing revenue
  • P1 - High: Frequent pain point, affects many users
  • P2 - Medium: Nice to have, affects some users
  • P3 - Low: Edge cases, minor improvements
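As a sketch, the categories and priority levels above can be encoded for a simple triage script. The FeedbackItem shape and field names here are illustrative, not a prescribed schema:

from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    BUG = "bug"
    FEATURE_REQUEST = "feature_request"
    UX = "ux"
    PERFORMANCE = "performance"
    CONTENT = "content"
    NOISE = "noise"

class Priority(Enum):
    P0 = 0  # critical: blocking users, losing revenue
    P1 = 1  # high: frequent pain point, affects many users
    P2 = 2  # medium: nice to have, affects some users
    P3 = 3  # low: edge cases, minor improvements

@dataclass
class FeedbackItem:
    text: str
    category: Category
    priority: Priority
    source: str  # "app_store", "support", "in_app", ...

def triage(items: list[FeedbackItem]) -> list[FeedbackItem]:
    # Filter out noise entirely, then queue P0 first.
    actionable = [i for i in items if i.category is not Category.NOISE]
    return sorted(actionable, key=lambda i: i.priority.value)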

4. Prioritize Actions

Use sprint-prioritizer agent:

@sprint-prioritizer help prioritize these items:

Feedback themes from feedback-synthesizer:
1. [Theme 1]: Affects X% of users, severity [level]
2. [Theme 2]: Affects Y% of users, severity [level]

Constraints:
- 6-day sprint cycle
- Team size: [X developers]
- Current priorities: [list]

Recommend: Top 3 items for next sprint

Prioritization framework (RICE):

Score = (Reach × Impact × Confidence) / Effort

Reach: How many users affected (per quarter)
Impact: How much it helps (0.25 = minimal, 3 = massive)
Confidence: How sure are you (0-100%)
Effort: Person-weeks required
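The formula translates directly into a scoring helper. A minimal sketch; confidence is expressed as a fraction here, and the example items and numbers are invented for illustration:

def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    # reach: users affected per quarter
    # impact: 0.25 (minimal) to 3 (massive)
    # confidence: 0.0 to 1.0 (i.e. 0-100% as a fraction)
    # effort: person-weeks
    return (reach * impact * confidence) / effort

# Rank two candidate items (numbers invented for illustration).
items = [
    ("Fix slow search", rice_score(reach=4000, impact=2, confidence=0.8, effort=2)),
    ("Calendar integration", rice_score(reach=500, impact=1, confidence=0.5, effort=6)),
]
for name, score in sorted(items, key=lambda x: x[1], reverse=True):
    print(f"{name}: RICE {score:.0f}")
# Fix slow search: RICE 3200
# Calendar integration: RICE 42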

5. Validate Before Building

Don't build every request blindly.

Validation questions:

  • Is this the real problem or a symptom?
  • How many users actually need this?
  • What's the underlying job-to-be-done?
  • Can we solve it differently?
  • What's the minimum we can build to test?

Example:

User request: "Add calendar integration"

Underlying need: "I forget to use the app"

Better solution: Push notifications - faster to build, and they solve the actual problem

6. Build & Ship

Build with rapid-prototyping:

  • Start with MVP of feedback-driven feature
  • Ship to subset of users first
  • Gather feedback on the solution
  • Iterate quickly

7. Close the Loop

Follow up with users:

  • Announce fixes/features to those who requested
  • "Thanks for your feedback on [X]. We shipped [solution]!"
  • Track if it actually solved their problem

User Interview Techniques

When to Interview

Interview for:

  • Understanding "why" behind feedback
  • Discovering unspoken needs
  • Validating new feature ideas
  • Understanding churned users

Don't interview for:

  • Validation of what you want to build (confirmation bias)
  • Getting feature ideas (users aren't product managers)
  • Detailed UX feedback (use usability testing)

Interview Script Pattern

Warm-up (2 min):

  • "Thanks for your time"
  • "Tell me about yourself and how you use [product]"

Problem exploration (10 min):

  • "What's frustrating about [current solution]?"
  • "Walk me through the last time you [did task]"
  • "What workarounds have you tried?"
  • "If you had a magic wand, what would you change?"

Feature validation (5 min):

  • "We're considering [feature]. What do you think?"
  • "How would this fit into your workflow?"
  • "What concerns do you have?"

Wrap-up (3 min):

  • "Anything else you'd like us to know?"
  • "Can we follow up if we have questions?"

Total: 20 minutes

Interview Insights

Listen for:

  • Emotional reactions (strong feelings = important)
  • Workarounds (signals missing functionality)
  • Frequency words ("every time", "always")
  • Jobs-to-be-done (underlying goals)

Red flags:

  • "It would be cool if..." (nice-to-have)
  • "You should add..." (feature suggestion without pain)
  • "I don't know, just better" (vague)

Feedback Metrics to Track

Track systematically:

  • Feedback volume (trending up/down?)
  • Sentiment (positive/negative/neutral %)
  • Top categories (bugs, features, UX)
  • Response time (how quickly you address feedback)
  • Resolution rate (% of feedback acted on)

Dashboard view:

This month:
- 234 feedback items
- Sentiment: 65% positive, 25% neutral, 10% negative
- Top category: Performance (34%)
- Avg response: 2.3 days
- Resolution rate: 78%

Trends:
- Negative feedback down 15% (performance fixes working)
- Feature requests up 40% (users more engaged)
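Once feedback is tagged, these numbers can be computed mechanically. A sketch assuming each item is a dict with sentiment, category, and resolved fields (an illustrative shape, not a required schema):

from collections import Counter

def dashboard(items):
    # Each item is assumed to be a dict with "sentiment"
    # ("positive" / "neutral" / "negative"), "category", and "resolved" (bool).
    if not items:
        return
    n = len(items)
    sentiment = Counter(i["sentiment"] for i in items)
    categories = Counter(i["category"] for i in items)
    top_cat, top_count = categories.most_common(1)[0]
    resolved = sum(1 for i in items if i["resolved"])

    print(f"{n} feedback items")
    for s in ("positive", "neutral", "negative"):
        print(f"Sentiment {s}: {sentiment[s] / n:.0%}")
    print(f"Top category: {top_cat} ({top_count / n:.0%})")
    print(f"Resolution rate: {resolved / n:.0%}")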

Integration with Development Cycle

Sprint Planning Integration

Every sprint:

  1. @feedback-synthesizer analyze last sprint's feedback
  2. @sprint-prioritizer integrate feedback into sprint planning
  3. Select 1-2 feedback-driven items for sprint
  4. Build and ship
  5. Monitor feedback on changes

Feature Validation Pattern

Before building large feature:

1. Collect requests for feature (how many users?)
2. Interview 5-10 users about the need
3. Build lightweight MVP or fake door test
4. Ship to small group
5. Measure usage and feedback
6. Decide: expand, pivot, or kill
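A fake door test (step 3) can be as small as a visible entry point that only records interest. A sketch; track() and show_message() are hypothetical placeholders for your real analytics logger and UI layer:

def track(event: str, **props):
    print(f"analytics: {event} {props}")  # placeholder for your event logger

def show_message(text: str):
    print(text)  # placeholder for your UI layer

def on_calendar_button_click(user_id: str):
    # The button exists, the feature does not: record demand instead.
    track("fake_door_clicked", feature="calendar_integration", user=user_id)
    show_message("Calendar integration is coming soon - want early access?")

Compare clicks against exposures to estimate real demand before committing to build.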

Feedback Response Patterns

Responding to Feature Requests

Template:

"Thanks for the suggestion!

We're tracking this idea - [briefly acknowledge their point].

Quick question to help us prioritize: [clarifying question about their use case]

We'll keep you posted if this makes it into development."

For popular requests:

"Great timing - this is something we're actively working on!

Expected timeline: [timeframe]
We'll email you when it ships.

Want to beta test it first?"

Responding to Bug Reports

Template:

"Thanks for reporting this!

Reproduced the issue: [brief description]
Priority: [P0/P1/P2]
Fix ETA: [timeline]

We'll follow up when this is resolved.

Temporary workaround: [if available]"

Responding to Negative Feedback

Template:

"Sorry you're having a frustrating experience.

I understand [restate their issue] is impacting your [workflow/experience].

We're [what you're doing about it]:
- [Immediate action]
- [Longer-term fix]

Can we hop on a quick call to understand better? [email/calendly]"

Prioritization Frameworks

Impact × Effort Matrix

High Impact, Low Effort → DO NOW (quick wins)
High Impact, High Effort → PLAN (strategic bets)
Low Impact, Low Effort → LATER (when time permits)
Low Impact, High Effort → NEVER (waste of resources)
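The same matrix as a tiny classifier. The 0-1 scale and threshold are illustrative; tune them to whatever scoring scale you use:

def quadrant(impact: float, effort: float, threshold: float = 0.5) -> str:
    # Map a 0-1 impact/effort pair to a matrix quadrant.
    high_impact = impact >= threshold
    high_effort = effort >= threshold
    if high_impact and not high_effort:
        return "DO NOW (quick win)"
    if high_impact and high_effort:
        return "PLAN (strategic bet)"
    if not high_impact and not high_effort:
        return "LATER (when time permits)"
    return "NEVER (waste of resources)"

print(quadrant(impact=0.9, effort=0.2))  # DO NOW (quick win)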

Kano Model

Feature categories:

  • Must-have: Expected, absence causes dissatisfaction
  • Performance: More is better (speed, reliability)
  • Delighters: Unexpected, cause satisfaction

Apply:

  • Must-haves: Fix immediately
  • Performance: Continuous improvement
  • Delighters: Differentiation opportunities

Feedback Quality Indicators

High-quality feedback:

  • Specific problem description
  • Context (when, where, how)
  • Impact statement
  • Frequency indication

Low-quality feedback:

  • Vague complaints
  • Feature lists without reasoning
  • "Just make it better"
  • One-off edge cases

Seek clarification for low-quality feedback.
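A rough first-pass heuristic for flagging feedback that needs clarification. The signal word lists are illustrative only; a real triage pass would use better classification:

CONTEXT_SIGNALS = ("when", "after", "every time", "always", "yesterday")
VAGUE_SIGNALS = ("just better", "would be cool", "you should add")

def needs_clarification(text: str) -> bool:
    # Flag feedback that matches vague patterns or lacks any context signal.
    lowered = text.lower()
    if any(v in lowered for v in VAGUE_SIGNALS):
        return True
    has_context = any(c in lowered for c in CONTEXT_SIGNALS)
    return len(lowered.split()) < 8 or not has_context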


Feedback is a gift. Collect it systematically, analyze it rigorously, act on it strategically.