retention-optimizer

@BellaBe/lean-os

SKILL.md

---
name: retention-optimizer
description: Post-PMF retention and monetization optimization agent for AI-first startups. Use this agent when analyzing cohort retention, predicting churn, designing engagement loops, optimizing monetization, or running resurrection campaigns. Specialist agent activated only after achieving product-market fit with actual user data.
---

Retention Optimizer Agent

Overview

The Retention Optimizer Agent maximizes customer lifetime value through data-driven retention, engagement, and monetization optimization. This specialist agent is activated only post-PMF when you have sufficient user behavioral data, not during early-stage building.

Primary Use Cases: Cohort analysis, churn prediction, engagement optimization, monetization expansion, win-back campaigns.

Lifecycle Phases: Post-PMF (primary), growth/scale, mature product optimization.

Core Functions

1. Cohort Analysis

Segment users and measure retention patterns to identify high-value behaviors.
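
As a rough illustration of step 4 of the workflow below (measuring retention curves), here is a minimal pandas sketch. It assumes an `events` table with `user_id`, `signup_date`, and `event_date` columns (names hypothetical) and buckets users into monthly signup cohorts:

```python
import pandas as pd

def cohort_retention(events: pd.DataFrame, milestones=(1, 7, 30, 90)) -> pd.DataFrame:
    """Percent of each monthly signup cohort still active at each milestone day."""
    df = events.copy()
    df["signup_date"] = pd.to_datetime(df["signup_date"])
    df["event_date"] = pd.to_datetime(df["event_date"])
    df["cohort"] = df["signup_date"].dt.to_period("M")
    df["day"] = (df["event_date"] - df["signup_date"]).dt.days

    cohort_sizes = df.groupby("cohort")["user_id"].nunique()
    rows = {}
    for d in milestones:
        # A user counts as retained at day d if they have any event on or after day d.
        retained = df[df["day"] >= d].groupby("cohort")["user_id"].nunique()
        retained = retained.reindex(cohort_sizes.index, fill_value=0)
        rows[f"D{d}"] = (retained / cohort_sizes * 100).round(1)
    out = pd.DataFrame(rows)
    out.insert(0, "size", cohort_sizes)
    return out

# Example usage with toy data:
# events = pd.DataFrame({
#     "user_id": [1, 1, 2],
#     "signup_date": ["2024-01-05"] * 3,
#     "event_date": ["2024-01-06", "2024-02-10", "2024-01-05"],
# })
# print(cohort_retention(events))
```

The same grouping works for channel or revenue segments by swapping the `cohort` column for an acquisition-channel or LTV-band column.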

Workflow:

  1. Segment by Acquisition Channel

    • Paid Ads: Retention by channel (Facebook, Google, LinkedIn)
    • Organic: SEO, direct, referral
    • Viral: Product-led invites, word-of-mouth
    • Partnership: Co-marketing, integrations
    • Compare: Which channels produce sticky users?
  2. Segment by User Behavior

    • Power Users: Top 10% by usage, revenue, engagement
    • Casual Users: Moderate usage, sporadic engagement
    • At-Risk Users: Declining engagement, approaching churn
    • Dormant Users: Inactive for X days but not churned
    • Churned Users: Canceled or inactive >90 days
  3. Segment by Revenue

    • High-Value: Top 20% by LTV or MRR
    • Mid-Value: Middle 60%
    • Low-Value: Bottom 20%
    • Free Users: Never converted to paid
    • Analyze: Do high-value cohorts have distinct behaviors?
  4. Measure Retention Curves

    • Day 1, 7, 30, 90 Retention: % returning users at each milestone
    • Cohort Comparison: Are recent cohorts better than old?
    • Benchmark: Industry standards (SaaS: D30 >40% is good)
    • Identify Drop-Off: Where do users churn? (Day 3? Week 2?)
  5. Identify Value Moments

    • Aha Moment: When do users "get it"? (correlates with retention)
    • Habit Formation: What action frequency predicts retention?
    • Feature Adoption: Which features correlate with stickiness?
    • Network Effects: Do multi-user accounts retain better?

Output Template:

Cohort Retention Analysis

Cohort Performance Matrix:

| Cohort | Size | D1 | D7 | D30 | D90 | Revenue/User | LTV |
|--------|------|----|----|-----|-----|--------------|-----|
| Jan 2024 | 500 | 70% | 45% | 35% | 25% | $120 | $600 |
| Feb 2024 | 650 | 75% | 50% | 40% | 30% | $150 | $750 |
| Mar 2024 | 800 | 80% | 55% | 45% | TBD | $180 | $900+ |

Trend: ↑ Improving (recent cohorts retaining better)

Retention by Acquisition Channel:

| Channel | D30 Retention | LTV | CAC | LTV:CAC | Priority |
|---------|---------------|-----|-----|---------|----------|
| Referral | 60% | $1200 | $0 | ∞ | High |
| Organic SEO | 50% | $900 | $50 | 18:1 | High |
| Paid Social | 35% | $600 | $200 | 3:1 | Medium |
| Paid Search | 30% | $500 | $250 | 2:1 | Low |

Insight: Referral and organic users retain 2x better than paid. Invest in referral program.

User Segmentation:

Power Users (Top 10%):
├── Characteristics: Use product 5+ days/week, adopted X key features, invited teammates
├── Retention: 90% D90 retention
├── Revenue: $X ARPU (3x average)
├── Acquisition: 60% came from referral
└── Action: Study their behaviors, design onboarding to create more power users

Casual Users (60%):
├── Characteristics: Use 1-2x/week, limited feature adoption
├── Retention: 40% D90 retention
├── Revenue: $X ARPU (1x average)
└── Action: Engagement campaigns to increase usage frequency

At-Risk Users (15%):
├── Characteristics: Usage declining, last seen >14 days ago
├── Retention: 10% D90 retention (projected)
├── Revenue: $X ARPU
└── Action: Intervention campaigns, prevent churn

Dormant Users (15%):
├── Characteristics: Inactive 30-90 days, didn't formally churn
├── Revenue Lost: $X MRR at risk
└── Action: Resurrection campaigns

Value Moments Analysis:

Aha Moment: [User completes X action]
├── Users who reach aha moment: 70% D30 retention
├── Users who don't: 15% D30 retention
├── Time to Aha: Median 2 days (target: <24 hours)
└── Action: Optimize onboarding to reach aha faster

Habit-Forming Action: [Use product X times per week]
├── Users with 5+ sessions/week: 85% D90 retention
├── Users with 1-2 sessions/week: 30% D90 retention
├── Current: X% of users hit 5+ sessions
└── Action: Design triggers to increase usage frequency

Feature Adoption vs. Retention:

| Feature | Adoption Rate | D90 Retention (Adopted) | D90 Retention (Not Adopted) | Lift |
|---------|---------------|-------------------------|----------------------------|------|
| Feature A | 80% | 60% | 20% | +40pp |
| Feature B | 50% | 55% | 35% | +20pp |
| Feature C | 20% | 50% | 40% | +10pp |

Insight: Feature A is critical for retention. Push adoption in onboarding.

Network Effects:

Single-User Accounts:
├── D90 Retention: 35%
├── Upgrade Rate: 20%
└── LTV: $X

Multi-User Accounts:
├── D90 Retention: 75% (+40pp lift)
├── Upgrade Rate: 60% (+40pp lift)
├── LTV: $X (2.5x higher)
└── Action: Prioritize team invite prompts, viral loops

2. Churn Prediction

Build predictive models to identify at-risk users before they churn.
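
One simple way to turn the leading indicators in the workflow below into a 0-100 risk score is logistic regression. A minimal scikit-learn sketch, assuming you have already assembled a per-user feature table with a historical `churned` label (all column names are hypothetical):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

FEATURES = ["sessions_wow_change", "core_feature_days_since_use",
            "nps_score", "support_tickets_30d", "payment_failures_90d"]

def fit_churn_model(users: pd.DataFrame) -> LogisticRegression:
    """users: one row per user with FEATURES plus a 0/1 'churned' label."""
    X_train, X_test, y_train, y_test = train_test_split(
        users[FEATURES], users["churned"],
        test_size=0.2, random_state=42, stratify=users["churned"])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    preds = model.predict(X_test)
    # Report precision/recall on held-out data, as step 2 of the workflow suggests.
    print("precision:", precision_score(y_test, preds),
          "recall:", recall_score(y_test, preds))
    return model

def churn_risk_scores(model: LogisticRegression, users: pd.DataFrame) -> pd.Series:
    """0-100 churn risk per user; higher means more likely to churn."""
    return pd.Series(model.predict_proba(users[FEATURES])[:, 1] * 100,
                     index=users.index).round(1)
```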

Workflow:

  1. Identify Leading Indicators

    • Usage Drop: Decline in sessions, time spent, actions per session
    • Feature Abandonment: Stopped using core features
    • Support Tickets: Increase in complaints or escalations
    • Payment Issues: Failed charges, downgrade requests
    • Sentiment: NPS drop, negative feedback
  2. Model Churn Probability

    • Historical Analysis: What did churned users do before leaving?
    • Predictive Score: 0-100 churn risk for each user
    • Time Horizon: Predict churn in next 7, 14, 30 days
    • Model Accuracy: Precision/recall on historical data
  3. Segment At-Risk Users

    • High-Risk (>70% churn probability): Immediate intervention
    • Medium-Risk (40-70%): Engagement campaigns
    • Low-Risk (<40%): Standard nurture
    • Prioritize by LTV: Save high-value users first
  4. Identify Churn Reasons

    • Product Issues: Bugs, missing features, poor UX
    • Value Perception: "Not worth the price", "don't use enough"
    • Competitive: Switched to competitor
    • Situational: Company closed, budget cut, role change
    • Track: Exit surveys, cancellation reasons, interviews
  5. Design Intervention Triggers

    • Usage Drop: If sessions decline 50% WoW → send re-engagement email
    • Feature Abandonment: If stopped using core feature → tutorial email
    • Payment Failure: If charge fails → proactive outreach with help
    • Sentiment: If NPS <6 → personal outreach from founder/CS

Output Template:

Churn Prediction Model

Leading Indicators (Ranked by Predictive Power):

1. Sessions per Week Drop >50%
   ├── Correlation with Churn: r=0.75 (strong)
   ├── Time Lag: 14 days before churn
   ├── Prevalence: 80% of churned users showed this signal
   └── Action Trigger: Send re-engagement email if 2-week decline

2. Stopped Using Core Feature (Feature A)
   ├── Correlation with Churn: r=0.68
   ├── Time Lag: 21 days before churn
   ├── Prevalence: 65% of churned users
   └── Action Trigger: Tutorial email + product tour

3. NPS Score <6 (Detractor)
   ├── Correlation with Churn: r=0.62
   ├── Time Lag: 30 days before churn
   ├── Prevalence: 55% of churned users
   └── Action Trigger: Personal outreach from CS or founder

4. Support Ticket Volume Spike
   ├── Correlation: r=0.55
   ├── Time Lag: 7 days before churn
   └── Action: Escalate to senior support, offer concession

5. Payment Failure
   ├── Correlation: r=0.50
   ├── Time Lag: Immediate churn if not resolved
   └── Action: Proactive outreach, offer alternative payment

Churn Risk Scoring:

Current At-Risk Users:

| User Segment | Count | Avg Churn Risk | Total MRR at Risk | Intervention Priority |
|--------------|-------|----------------|-------------------|----------------------|
| High-Value, High-Risk | 25 | 80% | $10K | CRITICAL |
| High-Value, Medium-Risk | 60 | 55% | $15K | HIGH |
| Low-Value, High-Risk | 150 | 75% | $5K | MEDIUM |
| Low-Value, Medium-Risk | 300 | 50% | $8K | LOW |

Intervention Plan:

Critical (25 users, $10K MRR at risk):
├── Manual Outreach: Founder calls each user personally
├── Timeline: This week
├── Offer: 1-month free extension + dedicated onboarding
├── Goal: Reduce churn from 80% to <40% (save $4K MRR)
└── Owner: CEO + CS Lead

High (60 users, $15K MRR at risk):
├── Personalized Email: CS manager emails with specific help offer
├── Timeline: Next 2 weeks
├── Offer: Free advanced training session
├── Goal: Reduce churn from 55% to <30% (save $3.75K MRR)
└── Owner: CS team

Churn Reason Analysis (Last 90 Days):

| Reason | % of Churn | Addressable? | Action |
|--------|------------|--------------|--------|
| Not using enough | 35% | YES | Engagement campaigns, onboarding improvements |
| Too expensive | 20% | MAYBE | Value demonstration, pricing experiment |
| Missing feature | 15% | YES | Prioritize feature requests in roadmap |
| Switched to competitor | 12% | HARD | Competitive analysis, differentiation |
| Company closure | 10% | NO | Accept |
| Other | 8% | VARIES | Follow-up interviews |

Actionable Churn Reasons (70% of churn):
1. Not using enough (35%):
   - Root cause: Didn't reach aha moment or form habit
   - Fix: Improve onboarding, increase activation rate
   - Expected Impact: Reduce overall churn by 10pp

2. Too expensive (20%):
   - Root cause: Not seeing ROI or value
   - Fix: Better value demonstration, case studies, ROI calculator
   - Expected Impact: Reduce churn by 5pp

3. Missing feature (15%):
   - Root cause: Specific feature gaps vs. competitors
   - Fix: Build top 3 requested features
   - Expected Impact: Reduce churn by 4pp

Intervention Triggers (Automated):

Trigger 1: Usage Drop Alert
├── Condition: Sessions drop >50% WoW for 2 consecutive weeks
├── Action: Automated email sequence (Day 0, 3, 7)
├── Content: "We noticed you haven't been around. Need help?"
├── Escalation: If no response after 7 days → CS manual outreach
└── Current Volume: ~50 users/week

Trigger 2: Feature Abandonment
├── Condition: Stopped using core feature for 14 days
├── Action: Tutorial email + in-app prompt
└── Volume: ~30 users/week

Trigger 3: NPS Detractor Follow-Up
├── Condition: NPS score <6
├── Action: Founder/CS personal email within 24 hours
└── Volume: ~10 users/month

Trigger 4: Payment Failure
├── Condition: Charge declined
├── Action: Immediate email + SMS (if available) + retry in 3 days
└── Volume: ~40 users/month
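
The four triggers above are threshold rules, so they can run as a simple nightly job. A minimal sketch; the field names and campaign identifiers are hypothetical, and Trigger 1 is simplified to a single-week check rather than the two consecutive weeks specified above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserActivity:
    user_id: str
    sessions_this_week: int
    sessions_last_week: int
    days_since_core_feature: int
    nps: Optional[int]
    payment_failed: bool

def intervention_for(u: UserActivity) -> Optional[str]:
    """Return the first matching intervention campaign, or None."""
    if u.payment_failed:
        return "payment_recovery_email"       # Trigger 4: immediate outreach + retry
    if u.nps is not None and u.nps < 6:
        return "founder_personal_email"       # Trigger 3: within 24 hours
    if u.sessions_last_week > 0 and u.sessions_this_week <= 0.5 * u.sessions_last_week:
        return "reengagement_sequence"        # Trigger 1: >50% WoW usage drop
    if u.days_since_core_feature >= 14:
        return "core_feature_tutorial_email"  # Trigger 2: feature abandonment
    return None
```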

Churn Prevention ROI:

Current Monthly Churn: X% ($X MRR lost)
Target Monthly Churn: X% (reduce by Xpp)
MRR Saved: $X/month
Annual Impact: $X ARR saved
Cost of Program: $X/year (tools + CS time)
Net Benefit: $X/year
ROI: Xx
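
The ROI block above reduces to a few multiplications. A small helper with illustrative (made-up) numbers in the usage example; it ignores compounding of saved MRR across months:

```python
def churn_program_roi(mrr: float, churn_now: float, churn_target: float,
                      annual_cost: float) -> dict:
    """All churn rates are monthly fractions, e.g. 0.05 for 5%."""
    mrr_saved = mrr * (churn_now - churn_target)   # MRR no longer lost each month
    arr_saved = mrr_saved * 12
    return {
        "mrr_saved": mrr_saved,
        "arr_saved": arr_saved,
        "net_benefit": arr_saved - annual_cost,
        "roi": arr_saved / annual_cost,            # gross return per dollar of program cost
    }

# e.g. $100K MRR, churn 5% -> 3%, program costs $10K/year (illustrative figures):
# churn_program_roi(100_000, 0.05, 0.03, 10_000)
# -> {'mrr_saved': 2000.0, 'arr_saved': 24000.0, 'net_benefit': 14000.0, 'roi': 2.4}
```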

3. Engagement Loop Design

Design habit loops, reward systems, and re-engagement campaigns.

Workflow:

  1. Analyze User Journey

    • Entry Points: How do new users first experience value?
    • Core Loop: What actions do power users repeat?
    • Friction Points: Where do users get stuck or drop off?
    • Value Moments: When do users feel delight or achievement?
  2. Design Habit Loops

    • Trigger: What reminds user to take action? (email, notification, context)
    • Action: What is the core action? (must be easy, valuable)
    • Variable Reward: What unpredictable value do they receive?
    • Investment: What makes them more likely to return? (data, progress, network)
  3. Implement Reward Systems

    • Progress Indicators: Streaks, levels, completion %
    • Achievements: Badges, milestones, gamification
    • Social Proof: Leaderboards, sharing, recognition
    • Intrinsic Rewards: Sense of accomplishment, mastery
  4. Build Re-Engagement Campaigns

    • Onboarding Series: Day 0, 1, 3, 7 (guide to aha moment)
    • Activation Campaign: Encourage completing setup, inviting team
    • Usage Campaigns: Weekly/monthly tips, best practices, case studies
    • Dormancy Campaigns: Win-back users who haven't logged in 7/14/30 days
  5. A/B Test Engagement Tactics

    • Email Frequency: Daily, weekly, monthly?
    • Notification Channels: Email, SMS, push, in-app?
    • Content Types: Tips, social proof, new features, discounts?
    • Timing: Morning, afternoon, evening? Weekday vs. weekend?

Output Template:

Engagement Loop Design

User Journey Map:

New User Flow:
├── Day 0: Signup → onboarding → first action (40% complete)
├── Day 1: Return → aha moment (25% of signups reach)
├── Day 3: Feature exploration (18% active)
├── Day 7: Habit formation OR churn (12% active, 28% churned)
└── Day 30: Power user OR casual user (8% power, 4% casual)

Drop-Off Points:
1. Onboarding → First Action: 60% drop
   - Friction: Too many steps, unclear value
   - Fix: Reduce steps from 5 to 2, add progress bar

2. First Action → Aha Moment: 37.5% drop
   - Friction: Don't know what to do next
   - Fix: Guided product tour, pre-populate with examples

3. Day 1 → Day 7: 52% drop
   - Friction: Forget to return, no habit formed
   - Fix: Email reminders, push notifications, streak rewards

Habit Loop Design:

Core Habit Loop (Power Users):
├── Trigger: Daily email digest of relevant updates
├── Action: Log in, check dashboard, take core action
├── Variable Reward: New insights, progress metrics, social proof
├── Investment: More data = better insights = more value next time
└── Cycle Time: Daily (goal: make it habitual)

Implementation:

1. Trigger Optimization:
   ├── Email: Daily digest at 9am (best open rate time)
   ├── Push Notification: If X happens (personalized)
   ├── In-App Prompt: When user logs in, surface relevant action
   └── A/B Test: Email vs. push vs. in-app effectiveness

2. Action Simplification:
   ├── Current: 3 steps to complete core action
   ├── Target: 1 click to complete
   ├── Implementation: Pre-fill, smart defaults, keyboard shortcuts
   └── Expected Impact: +20% completion rate

3. Variable Reward Design:
   ├── Achievement Unlocks: "You've completed X [actions]! Unlock [feature/badge]"
   ├── Progress Feedback: "You're in top 10% of users", "X% to next level"
   ├── Social Proof: "X teammates completed this", "See how you compare"
   └── Surprise & Delight: Occasional bonus features, credits, recognition

4. Investment Mechanisms:
   ├── Data Accumulation: More usage = richer history = better insights
   ├── Customization: Personalized dashboard, saved preferences
   ├── Network: Invite team = shared workspace = switching cost
   └── Progress: Streaks, levels, unlocks (loss aversion keeps them coming back)

Reward Systems:

Progress Indicators:
├── Streak Tracking: "7-day streak! Don't break it!"
├── Completion %: "Profile 80% complete. Finish to unlock [feature]"
├── Levels: "Level 5 Power User. 50 more actions to Level 6"
└── Impact: Users with streaks >7 days: 80% D30 retention
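
Since streaks are the reward mechanic with the biggest measured lift in this template, a minimal sketch of computing a user's current streak from their set of active dates (purely illustrative):

```python
from datetime import date, timedelta

def current_streak(active_days: set[date], today: date) -> int:
    """Consecutive active days counting back from today (or yesterday,
    so the streak isn't shown as broken before the user logs in today)."""
    start = today if today in active_days else today - timedelta(days=1)
    streak, day = 0, start
    while day in active_days:
        streak += 1
        day -= timedelta(days=1)
    return streak

# current_streak({date(2024, 3, 1), date(2024, 3, 2), date(2024, 3, 3)},
#                today=date(2024, 3, 3))  # -> 3
```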

Achievements:
├── Onboarding: "First Action", "Aha Moment Reached", "Invited Teammate"
├── Usage: "10 Day Streak", "Power User", "100 Actions Completed"
├── Milestones: "1 Month Anniversary", "Saved $X", "Achieved Y Result"
└── Display: Badge on profile, email congrats, in-app celebration

Social Proof:
├── Leaderboards: "Top 10 users this week"
├── Sharing: "Share your progress" → drives virality
├── Recognition: "User of the Month" feature
└── Privacy: Opt-in, anonymous leaderboards

Re-Engagement Campaigns:

Onboarding Series (Automated):
├── Day 0: Welcome email, set expectations, first action CTA
├── Day 1: "Complete your profile" or "Invite your team"
├── Day 3: "Try [key feature]" tutorial
├── Day 7: "You're on your way!" progress update + advanced tips
└── Metrics: D7 activation rate increased from 25% to 35%

Activation Campaign:
├── Trigger: User signed up but hasn't completed onboarding
├── Sequence: 3 emails over 7 days
├── Content: Social proof, quick wins, case study
├── Goal: Convert 20% of inactive signups to active users
└── Current: 15% conversion

Usage Campaign (Weekly):
├── Audience: Active users (logged in last 30 days)
├── Content: Tips, best practices, new features, customer stories
├── Goal: Increase feature adoption by 10%
└── Metrics: Open rate X%, click rate X%, feature adoption +X%

Dormancy Campaign (Tiered):

7-Day Inactive:
├── Email 1: "We miss you! Here's what's new"
├── Content: Recent improvements, new features, customer wins
├── CTA: Log in to see updates
└── Reactivation: 15%

14-Day Inactive:
├── Email 2: "Need help getting started?"
├── Content: Tutorial video, customer success stories
├── CTA: Book 15-min onboarding call
└── Reactivation: 8%

30-Day Inactive:
├── Email 3: "Before you go..." (last chance)
├── Content: Discount offer, paused account option
├── CTA: Reactivate with 1-month free
└── Reactivation: 5%
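
A small sketch of how the three dormancy tiers above could be assigned from a `last_seen` timestamp; the campaign names are placeholders:

```python
from datetime import datetime
from typing import Optional

def dormancy_campaign(last_seen: datetime, now: datetime) -> Optional[str]:
    """Map days of inactivity to the tiered win-back emails above."""
    days = (now - last_seen).days
    if days >= 30:
        return "dormant_30d_last_chance"   # Email 3: discount / pause offer
    if days >= 14:
        return "dormant_14d_help_offer"    # Email 2: tutorial + onboarding call
    if days >= 7:
        return "dormant_7d_whats_new"      # Email 1: "We miss you"
    return None                            # still active, no campaign
```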

A/B Test Results:

Test 1: Email Frequency
├── Variant A: Daily emails → 15% open rate, 25% unsubscribe
├── Variant B: Weekly emails → 35% open rate, 5% unsubscribe
├── Winner: Weekly (better engagement, lower churn)
└── Implementation: Switch all campaigns to weekly

Test 2: Notification Type
├── Variant A: Email only → 20% click rate
├── Variant B: Email + push → 35% click rate
├── Winner: Multi-channel (+15pp lift)
└── Implementation: Enable push for opted-in users

Test 3: Reward Type
├── Variant A: Badge/achievement → 5% lift in D30 retention
├── Variant B: Streak counter → 12% lift in D30 retention
├── Winner: Streak counter
└── Implementation: Prominent streak display in app

Engagement Loop Metrics:

Current State:
├── DAU/MAU: 25% (stickiness)
├── Actions per User per Week: 3.5
├── D30 Retention: 40%
└── Power User %: 10%

Target State (6 months):
├── DAU/MAU: 35% (+10pp via habit loops)
├── Actions per User per Week: 5.0 (+43% via engagement campaigns)
├── D30 Retention: 50% (+10pp via onboarding + rewards)
└── Power User %: 15% (+5pp via activation + habit formation)

Implementation Roadmap:

Month 1-2:
├── Optimize onboarding (reduce steps, add progress)
├── Implement streak tracking
├── Launch dormancy email campaigns
└── Expected: +5pp D30 retention

Month 3-4:
├── Build achievement system
├── Optimize email cadence based on A/B tests
├── Add push notifications
└── Expected: +3pp D30 retention

Month 5-6:
├── Social proof features (leaderboards, sharing)
├── Advanced habit loop optimization
├── Refine based on data
└── Expected: +2pp D30 retention, reach 50% target

4. Monetization Expansion

Identify upsell opportunities, test pricing elasticity, optimize revenue per user.
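
For the usage-based upsell path in the workflow below, a minimal sketch of the soft-limit / hard-limit prompting logic; the 80% threshold and the message keys are assumptions:

```python
from typing import Optional

def upgrade_prompt(used: int, limit: int) -> Optional[str]:
    """Decide which in-app upgrade prompt (if any) to show a free-tier user."""
    if limit <= 0:
        return None
    ratio = used / limit
    if ratio >= 1.0:
        return "hard_limit_upgrade_modal"   # block further usage, one-click upgrade CTA
    if ratio >= 0.8:
        return "soft_limit_warning_banner"  # "You're at 80% of your limit"
    return None

# upgrade_prompt(85, 100)  -> 'soft_limit_warning_banner'
# upgrade_prompt(100, 100) -> 'hard_limit_upgrade_modal'
```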

Workflow:

  1. Identify Upsell Opportunities

    • Usage-Based: Users hitting limits → upgrade to higher tier
    • Feature-Based: Users requesting premium features
    • Team Expansion: Add more seats, departments
    • Service Add-Ons: Professional services, support, training
    • Timing: When to present upgrade? (limit hit, milestone reached)
  2. Test Pricing Elasticity

    • Price Increase: Can you raise prices without losing customers?
    • New Tiers: Would users pay for premium tier?
    • Discounting: Does annual discount drive commitment?
    • Grandfathering: Keep old customers on old pricing or force upgrade?
  3. Optimize Feature Gates

    • Free vs. Paid: Which features to give away, which to gate?
    • Tiering: How to structure Good/Better/Best?
    • Usage Limits: Soft limits (nag) vs. hard limits (block)?
    • Value Metric: Price per seat, per usage, per value delivered?
  4. Design Upgrade Flows

    • In-App Prompts: "You've hit your limit. Upgrade to continue"
    • Email Nurture: "Unlock these features with Pro"
    • Sales-Assisted: High-touch for enterprise upgrades
    • Self-Serve: One-click upgrade for lower tiers
  5. Test Monetization Experiments

    • A/B Test Pricing: $X vs. $Y price point
    • Packaging Tests: 3 tiers vs. 4 tiers
    • Trial Length: 14 days vs. 30 days free trial
    • Payment Terms: Monthly vs. annual vs. usage-based

Output Template:

Monetization Optimization

Revenue Expansion Opportunities:

Current State:
├── ARPU: $X/month
├── Upgrade Rate (Free → Paid): X%
├── Expansion Revenue (Existing): X% of MRR
├── Churn: X% monthly
└── Net Revenue Retention: X% (target: >100%)
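
Net revenue retention in the block above is the one metric that combines churn and expansion. A minimal formula sketch with illustrative numbers:

```python
def net_revenue_retention(starting_mrr: float, expansion: float,
                          contraction: float, churned: float) -> float:
    """NRR for a cohort over a period: >100% means expansion outpaces churn."""
    return (starting_mrr + expansion - contraction - churned) / starting_mrr * 100

# e.g. $50K starting MRR, $4K expansion, $1K downgrades, $2.5K churned (illustrative):
# net_revenue_retention(50_000, 4_000, 1_000, 2_500)  # -> 101.0
```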

Opportunity 1: Usage-Based Upsells
├── Segment: Users hitting X limit (current: 30% of free users)
├── Current Behavior: 40% churn when hitting limit, 10% upgrade, 50% stay on free
├── Opportunity: Improve upgrade rate from 10% to 25%
├── Expected Impact: +$X MRR/month
├── Implementation:
│   ├── Soft limit warning: "You're at 80% of your limit"
│   ├── Hard limit upgrade prompt: "Upgrade to continue"
│   ├── Time-limited offer: "Upgrade now, get 20% off first month"
│   └── A/B test: Pricing $X vs. $Y at upgrade moment
└── Timeline: Launch in 4 weeks

Opportunity 2: Feature-Based Upsells
├── Segment: Users requesting premium features (20% of paid users)
├── Top Requested Features:
│   ├── Advanced analytics (15% request)
│   ├── API access (12% request)
│   └── Priority support (10% request)
├── Current: Features not available at any tier
├── Proposal: Add "Pro" tier at $X/month with these features
├── Expected: 20% of paid users upgrade → +$X MRR
├── Test: Survey willingness to pay, build MVP, launch beta
└── Timeline: 3 months

Opportunity 3: Team Expansion
├── Segment: Multi-user accounts (currently 40% of accounts)
├── Current Pricing: Per-seat at $X/seat
├── Observation: Teams with 5+ seats retain 2x better
├── Opportunity: Encourage seat expansion
├── Implementation:
│   ├── Teammate invite prompts in-app
│   ├── "Invite your team" email campaign
│   ├── Volume discount: 10+ seats get 20% off
│   └── Usage analytics: "Your team is active, add more seats to collaborate better"
├── Expected: Increase seats/account from 2.5 to 3.5 → +40% revenue
└── Timeline: 6 weeks

Opportunity 4: Annual Pre-Pay Discount
├── Current: 80% monthly, 20% annual
├── Current Annual Discount: 17% (2 months free)
├── Cash Flow Issue: Need more upfront capital
├── Proposal: Increase annual discount to 25% (3 months free)
├── Expected: Shift 80/20 to 50/50 mix
├── Impact:
│   ├── Short-term: +$X cash upfront
│   ├── Long-term: 5% revenue decrease (but worth it for cash flow)
│   └── Retention: Annual customers churn 50% less than monthly
└── Test: Offer to new customers only, measure conversion

Pricing Elasticity Analysis:

Price Increase Test (Completed):
├── Test: Raised prices 20% for new customers ($X → $Y)
├── Impact on Conversion: Dropped from 15% to 12% (-3pp, ~80% of prior volume)
├── Impact on Revenue: Slightly down on new business (1.20 price × 0.80 volume ≈ 0.96); +20% ARPU per converted customer
├── Impact on Churn: No change (existing customers grandfathered)
├── Decision: Keep new pricing (higher ARPU with unchanged churn improves LTV and payback)
└── Next: Test another 10% increase in 6 months
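
Because price and conversion interact multiplicatively, it helps to make the break-even explicit. A small sketch; the numbers mirror the illustrative test above:

```python
def new_business_revenue_change(price_change: float, conversion_before: float,
                                conversion_after: float) -> float:
    """Relative change in new-business revenue after a price change.
    A +p price change is revenue-neutral as long as conversion retains
    at least 1 / (1 + p) of its prior level."""
    volume_retained = conversion_after / conversion_before
    return (1 + price_change) * volume_retained - 1

# +20% price, conversion 15% -> 12%:
# new_business_revenue_change(0.20, 0.15, 0.12)  # -> -0.04 (about -4%)
# Break-even volume retention for +20%: 1 / 1.2 ≈ 83.3% of prior conversions.
```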

New Tier Test (In Progress):
├── Hypothesis: 20% of users would pay for premium tier
├── Test Design: Survey → MVP → Private beta
├── Pricing: $X/month (3x current Pro tier)
├── Features: Advanced analytics, API, priority support
├── Expected: 15-20% of Pro users upgrade → +$X MRR
├── Status: Private beta with 50 users, collecting feedback
└── Decision Point: 8 weeks

Feature Gating Optimization:

Current Tiers:

Free:
├── Features: [Basic features]
├── Limits: X per month
├── Users: 60% of signups
├── Upgrade Rate: 10% to Starter
└── Purpose: Acquisition, lead gen

Starter ($X/month):
├── Features: [Core features + X]
├── Limits: X per month
├── Users: 30% of paid
├── ARPU: $X
└── Target: SMBs, individuals

Pro ($X/month):
├── Features: [All Starter + advanced]
├── Limits: X per month
├── Users: 60% of paid
├── ARPU: $X
└── Target: Teams, power users

Enterprise (Custom):
├── Features: All Pro + custom, SSO, dedicated support
├── Limits: Unlimited
├── Users: 10% of paid, 50% of revenue
├── ARPU: $X
└── Target: Large companies

Proposed: Add "Pro Plus" Tier ($X/month)
├── Rationale: Gap between Pro ($X) and Enterprise (custom, typically $X+)
├── Features: Pro + advanced analytics + API + priority support
├── Target: 20% of Pro users (high usage, feature requests)
├── Expected Revenue: $X MRR
└── Launch: Q2

Upgrade Flow Optimization:

In-App Prompts:
├── Trigger 1: Hit usage limit
│   ├── Message: "You've used X of X [actions]. Upgrade to continue!"
│   ├── CTA: "Upgrade Now" (one-click)
│   └── Conversion: 15% → target 25% via urgency, scarcity

├── Trigger 2: Feature request
│   ├── Message: "This feature is available on Pro. Upgrade to unlock!"
│   ├── CTA: "See Pro Features"
│   └── Conversion: 8% → target 15% via value demonstration

└── Trigger 3: Milestone achievement
    ├── Message: "You've completed X! Unlock more with Pro"
    ├── CTA: "Unlock Pro"
    └── Conversion: 5% → target 10% via celebration + offer

Email Nurture:
├── Sequence: 4 emails over 14 days to free users
├── Email 1: Social proof ("Join X customers on Pro")
├── Email 2: Feature spotlight (demo advanced feature)
├── Email 3: Case study (customer achieved X result with Pro)
├── Email 4: Limited offer ("Upgrade this week, save 20%")
└── Conversion: 3% overall → target 5%

Monetization Experiments Queue:

1. Test: Increase Starter tier price 15%
   - Hypothesis: Price is too low, customers would pay more
   - Success Metric: <5% drop in conversion
   - Timeline: 4 weeks
   - Decision: If successful, roll out to all new customers

2. Test: Volume discount for annual plans
   - Hypothesis: 25% annual discount drives more prepaid
   - Success Metric: Annual mix increases from 20% to 40%
   - Timeline: 8 weeks
   - Decision: Permanent if cash flow improves

3. Test: Usage-based pricing tier
   - Hypothesis: Some users would prefer pay-as-you-go
   - Success Metric: 10% of new users choose usage-based
   - Timeline: 12 weeks (more complex implementation)
   - Decision: If successful, add as 4th pricing option

Revenue Impact Forecast:

| Initiative | Timeline | MRR Impact | One-Time | Confidence |
|------------|----------|------------|----------|------------|
| Usage upsells | 4 weeks | +$X | - | High |
| Annual discount shift | 6 weeks | -$X MRR, +$X cash | +$X | High |
| Team expansion | 6 weeks | +$X | - | Medium |
| New Pro Plus tier | 12 weeks | +$X | - | Medium |
| Price increase | Ongoing | +$X | - | High |

**Total Expected MRR Lift: +$X (X% increase)**
**Total Expected ARR Lift: +$X**
**Timeline: 12 weeks to implement all**

5. Resurrection Campaigns

Design win-back campaigns for dormant and churned users.
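
A minimal sketch of the recency-by-value segmentation in step 1 of the workflow below; the thresholds follow the table later in this section, and the $1,000 LTV cutoff and field names are assumptions:

```python
import pandas as pd

def winback_segment(row: pd.Series, high_value_ltv: float = 1_000) -> str:
    """Classify a churned user by recency of churn and lifetime value."""
    days = row["days_since_churn"]
    high_value = row["ltv"] >= high_value_ltv
    if days > 90:
        return "HV long (>90d)" if high_value else "LV medium/long"
    if days > 30:
        return "HV medium (30-90d)" if high_value else "LV medium/long"
    return "HV recent (<30d)" if high_value else "LV recent"

# churned = pd.DataFrame({"days_since_churn": [12, 45, 200], "ltv": [1500, 300, 2000]})
# churned.apply(winback_segment, axis=1)
# -> ['HV recent (<30d)', 'LV medium/long', 'HV long (>90d)']
```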

Workflow:

  1. Segment Churned/Dormant Users

    • Recently Churned (<30 days): Easiest to win back
    • Medium Churned (30-90 days): Need strong reason to return
    • Long Churned (>90 days): Hardest, need major product changes
    • High-Value Churned: Prioritize by LTV
  2. Identify Churn Reasons

    • Exit Survey: Why did you leave?
    • Behavioral Analysis: What dropped off before churn?
    • Support Tickets: Did they have unresolved issues?
    • Segment: Churn reason by user segment
  3. Design Win-Back Offers

    • New Features: "We built the feature you requested"
    • Improvements: "We fixed the issues you had"
    • Discounts: "Come back, get 50% off for 3 months"
    • Re-Onboarding: "We'll help you succeed this time"
  4. Execute Resurrection Campaigns

    • Email Sequence: 3-5 emails over 4 weeks
    • Personalization: Address specific churn reason
    • Incentives: Time-limited offers to create urgency
    • Alternative: "Not ready? Pause account instead of cancel"
  5. Measure & Optimize

    • Reactivation Rate: % of churned users who return
    • Re-Churn Rate: Do they churn again quickly?
    • LTV of Reactivated: Is it worth the effort/cost?
    • ROI: Campaign cost vs. revenue recovered

Output Template:

Resurrection Campaign Strategy

Churned User Segmentation:

| Segment | Count | Avg LTV Lost | Total MRR Lost | Win-Back Priority |
|---------|-------|--------------|----------------|-------------------|
| High-Value, Recent (<30d) | 15 | $X | $X | CRITICAL |
| High-Value, Medium (30-90d) | 40 | $X | $X | HIGH |
| Low-Value, Recent | 80 | $X | $X | MEDIUM |
| High-Value, Long (>90d) | 120 | $X | $X | LOW |
| Low-Value, Medium/Long | 500 | $X | $X | IGNORE |

Churn Reason Analysis:

High-Value Churned Users:
├── Not using enough (40%): Didn't form habit, low engagement
├── Too expensive (25%): Budget concerns, ROI unclear
├── Missing features (20%): Specific feature gap vs. competitors
├── Switched to competitor (10%): Better offering elsewhere
└── Other (5%): Various

Addressable Churn Reasons:
1. Not using enough (40%): Offer re-onboarding + concierge setup
2. Too expensive (25%): Discount offer + ROI case study
3. Missing features (20%): "We built what you asked for"

Win-Back Campaign Design:

Campaign 1: Recent Churned, High-Value (15 users, $X MRR)

Approach: Personal Outreach + Strong Offer
├── Email 1 (Day 0): Personal email from founder
│   ├── Subject: "Can we win you back?"
│   ├── Content: Acknowledge they left, ask why, offer to help
│   └── CTA: Book 15-min call

├── Email 2 (Day 3): Address specific churn reason
│   ├── If "not using": Offer concierge onboarding
│   ├── If "too expensive": Offer 3 months at 50% off
│   ├── If "missing features": Announce new feature they wanted
│   └── CTA: "Come back and try again"

├── Email 3 (Day 7): Urgency + scarcity
│   ├── Subject: "Last chance: 50% off expires Friday"
│   ├── Content: Limited-time offer, expiring soon
│   └── CTA: "Reactivate now"

├── Email 4 (Day 14): Alternative to churn
│   ├── Subject: "Pause instead of cancel?"
│   ├── Content: Offer to pause account (keeps data), no charge
│   └── CTA: "Pause my account"

└── Email 5 (Day 30): Final goodbye + feedback request
    ├── Subject: "We'd love your feedback"
    ├── Content: Sorry to see you go, exit survey
    └── CTA: Survey link

Expected Results:
├── Reactivation Rate: 30% (vs. 0% without campaign)
├── MRR Recovered: $X (30% of $X)
├── Cost: $X (founder time, discount)
└── ROI: Xx

Campaign 2: Medium Churned, High-Value (40 users, $X MRR)

Approach: Automated Email Sequence + Product Updates

├── Email 1 (Day 0): "What we've built since you left"
│   ├── Content: Major product improvements, new features, customer wins
│   ├── Social Proof: "X customers came back and are thriving"
│   └── CTA: "See what's new"

├── Email 2 (Day 7): Address churn reason (segmented)
│   ├── Variant A (Not using): "We've made it easier to get started"
│   ├── Variant B (Too expensive): "New pricing, lower cost"
│   ├── Variant C (Missing features): "We built [feature]"
│   └── CTA: "Try it free for 30 days"

├── Email 3 (Day 14): Customer success story
│   ├── Content: Case study of similar customer achieving great results
│   ├── Testimonial: "[Customer] came back and achieved [result]"
│   └── CTA: "Get similar results"

└── Email 4 (Day 30): Final offer
    ├── Content: "Last chance: 2 months free if you return this week"
    ├── Urgency: Time-limited offer
    └── CTA: "Claim your offer"

Expected Results:
├── Reactivation Rate: 15% (vs. 0%)
├── MRR Recovered: $X (15% of $X)
├── Cost: $X (email tool + discount)
└── ROI: Xx

Campaign 3: Long Churned, Feature-Specific (50 users who churned due to missing Feature X)

Approach: Feature Launch Announcement

├── Email: "We built what you asked for: [Feature X]"
│   ├── Content: Announce Feature X is live, explain how it works
│   ├── Video Demo: Show feature in action
│   ├── Offer: "Come back, try it free for 60 days"
│   └── CTA: "See [Feature X] in action"

Expected Results:
├── Reactivation Rate: 20% (feature-specific, high intent)
├── MRR Recovered: $X
└── Cost: $X

Alternative to Churn: Account Pause

Offer:
├── "Not ready to lose your data? Pause instead"
├── Paused accounts: Keep data, no charge, reactivate anytime
├── Benefits: Lower friction than re-signup, preserved data = higher reactivation
└── Implementation: "Pause Account" button in cancellation flow

Current Cancellation Flow:
1. User clicks "Cancel subscription"
2. Confirm cancellation → Done (churn)

New Cancellation Flow:
1. User clicks "Cancel subscription"
2. "Before you go: Can we win you back?" (retention offer)
3. If decline: "Pause instead of cancel?" (pause offer)
4. If decline: "Why are you leaving?" (exit survey)
5. Confirm cancellation → Done

Expected Impact:
├── Cancellations → Pauses: 20% of churns pause instead
├── Pause → Reactivation: 40% reactivate within 6 months
├── Net Churn Reduction: ~8% of would-be churn recovered (20% pause rate × 40% reactivation)
└── Implementation: 2 weeks
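
As a worked example of the arithmetic above (the 5% baseline monthly churn rate is an assumed figure):

```python
gross_monthly_churn = 0.05   # assumed baseline
pause_rate = 0.20            # share of cancellations that pause instead
reactivation_rate = 0.40     # paused accounts that come back within 6 months

recovered_share = pause_rate * reactivation_rate            # 0.08 -> 8% of would-be churn
effective_churn = gross_monthly_churn * (1 - recovered_share)
print(f"{recovered_share:.0%} of churn recovered, effective churn {effective_churn:.2%}")
# -> "8% of churn recovered, effective churn 4.60%"
```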

Resurrection Campaign Metrics:

Campaign Performance:

| Campaign | Segment | Users | Reactivation Rate | MRR Recovered | Cost | ROI |
|----------|---------|-------|-------------------|---------------|------|-----|
| Recent HV | <30d churned | 15 | 30% | $X | $X | Xx |
| Medium HV | 30-90d churned | 40 | 15% | $X | $X | Xx |
| Feature Launch | Feature-specific | 50 | 20% | $X | $X | Xx |

**Total MRR Recovered: $X/month**
**Total Cost: $X**
**Total ROI: Xx**

Re-Churn Analysis:

Reactivated Users:
├── Churn Again <30 Days: 15% (quick re-churn, offer didn't fix issue)
├── Churn Again 30-90 Days: 25% (moderate stick)
├── Active >90 Days: 60% (successful win-back)
└── LTV of Reactivated: $X (lower than new users, but positive ROI)

Insight: Focus win-back on users who churned for addressable reasons (missing features, not using enough). Skip users who churned for competitive or situational reasons.

Optimization Roadmap:

Month 1:
├── Launch resurrection campaigns for recent churned high-value users
├── Implement account pause feature
└── Expected: Recover $X MRR

Month 2:
├── Expand campaigns to medium churned users
├── A/B test offer types (discount vs. feature vs. re-onboarding)
└── Expected: Recover $X MRR

Month 3:
├── Analyze re-churn patterns, optimize campaigns
├── Build automated resurrection workflows
└── Expected: Maintain $X MRR recovery rate

Ongoing:
├── Quarterly: Review churn reasons, update campaigns
├── Annually: Major resurrection campaign for long-churned users
└── Target: Reduce net churn by 5pp through resurrection

Input Requirements

Required:

  • user_behavior_data: Cohorts, events, sessions, engagement metrics
  • revenue_data: MRR, ARPU, LTV, upgrade rates
  • product_analytics: Feature adoption, retention, activation
  • churn_data: Churn rates, reasons, segments

Optional:

  • feedback_data: NPS scores, exit surveys, support tickets
  • competitive_intel: Why users switch to competitors

Example Input:

```json
{
  "user_behavior_data": {
    "cohorts": [
      {"month": "Jan 2024", "size": 500, "d30_retention": 35}
    ],
    "sessions_per_week": 3.5,
    "dau_mau": 25
  },
  "revenue_data": {
    "mrr": 50000,
    "arpu": 100,
    "ltv": 600,
    "upgrade_rate": 10,
    "churn_rate": 5
  },
  "product_analytics": {
    "activation_rate": 40,
    "aha_moment_rate": 25,
    "power_user_pct": 10
  },
  "churn_data": {
    "churn_rate": 5,
    "reasons": [
      {"reason": "Not using enough", "pct": 35},
      {"reason": "Too expensive", "pct": 20}
    ]
  }
}
```

Output Structure

```json
{
  "retention_baseline": {
    "d1": 70,
    "d7": 45,
    "d30": 40,
    "d90": 30
  },
  "churn_analysis": {
    "segments": {"high_value_high_risk": 25},
    "reasons": ["Not using enough", "Too expensive"],
    "predictors": ["Usage drop >50%", "Feature abandonment"]
  },
  "optimization_experiments": [
    {
      "test": "Improve onboarding flow",
      "hypothesis": "Reducing steps increases activation",
      "metric": "D7 retention",
      "expected_lift": 10
    }
  ],
  "monetization_opportunities": [
    {
      "segment": "Users hitting limits",
      "strategy": "Usage-based upsells",
      "revenue_impact": 5000
    }
  ],
  "engagement_loops": [
    {
      "trigger": "Daily email digest",
      "action": "Check dashboard",
      "reward": "New insights"
    }
  ],
  "implementation_priority": [
    {"initiative": "Fix onboarding", "impact": "high", "effort": "medium"}
  ]
}
```

Integration with Other Agents

Receives Input From:

  • validation: Experiment results inform retention optimization
  • business-model: Unit economics inform monetization strategy
  • go-to-market: Channel data informs cohort analysis

Provides Input To:

  • business-model: Updated LTV, churn, expansion revenue
  • execution: Retention features prioritized in roadmap
  • go-to-market: Churn insights inform positioning, messaging

Best Practices

  1. Data-Driven: Every decision backed by cohort analysis
  2. Segment Ruthlessly: Power users ≠ casual users ≠ at-risk users
  3. Focus on Value Moments: Optimize for aha moments, not vanity metrics
  4. Test & Iterate: A/B test everything (engagement, pricing, messaging)
  5. Prevent > Resurrect: Retention > win-back (but do both)

Common Pitfalls

  • ❌ Optimizing for signups instead of retention (leaky bucket)
  • ❌ One-size-fits-all engagement (power users need different treatment)
  • ❌ Ignoring churn until it's too late (predict and prevent)
  • ❌ Raising prices without demonstrating value (churn spike)
  • ✅ Focus on retention, segment users, predict churn, test pricing

This agent maximizes customer lifetime value through data-driven retention, engagement, and monetization optimization.