| name | opportunity-solution-tree |
| description | Guide for creating Opportunity Solution Trees (OST) for pre-PMF startups. Use when discussing product discovery, problem validation, customer research, or when helping startups identify problems worth solving. Use for queries about OST framework, problem-solution mapping, or validating product ideas. |
The Opportunity Solution Tree: A Detailed Guide for Pre-PMF Startups
What It Actually Is
The Opportunity Solution Tree is a framework for exploring and mapping the problem space before committing to solutions. For pre-PMF startups, it's less about optimizing existing metrics and more about discovering which problems are worth solving and for whom.
Think of it as a systematic way to avoid building something nobody wants. Instead of jumping from "I have an idea" to "let's build it," the OST forces you to map out: what you're trying to learn or achieve, what problems exist in your target market, what you might build, and how you'll test if you're right.
The tree is a living document—not a one-time planning exercise. As you learn from experiments and customer conversations, opportunities shift in priority, new ones emerge, and solutions evolve or get discarded. This continuous discovery process is what helps pre-PMF startups navigate from uncertainty to product-market fit.
The Four Levels Explained
1. Outcome (The Root)
What it should be: Your desired outcome at the pre-PMF stage isn't typically a polished KPI. It's more like a learning goal or an early traction signal that indicates you're onto something real.
Good pre-PMF outcomes:
- "Validate that [ICP] will pay for a solution to [problem area]"
- "Get 10 companies in [industry] actively using our prototype weekly"
- "Identify which segment within [broad ICP] has the most urgent need"
- "Achieve $10K MRR with manual, non-scalable processes"
- "Find the problem worth building a company around in [market space]"
Not good outcomes:
- "Build an AI-powered platform" (that's a solution)
- "Launch our product" (that's an activity)
- "Become the leading provider in..." (too vague, too distant)
- Precise percentage improvements in metrics you don't have yet
The key principle: Your outcome should be specific enough to guide decisions but humble enough to acknowledge uncertainty. Pre-PMF, you're often trying to learn something fundamental about your market, not optimize something you've already proven.
2. Opportunities (The First Branches)
What they should be: Real problems, pain points, needs, or "jobs to be done" that your ICP experiences. These come from customer conversations, observations, and research—not from your assumptions.
Critical distinction: Opportunities are problems in the customer's world, not gaps in the market or ideas you have. They should be framed from the customer's perspective.
Good opportunities (for a startup targeting small e-commerce brands):
- "We struggle to understand which marketing channels actually drive profitable customers"
- "Our inventory is constantly out of sync across platforms, causing overselling"
- "We can't afford a full-time developer but need custom integrations between our tools"
- "Customer support takes 40% of our time but we can't afford to hire help"
Not good opportunities:
- "No good AI-powered analytics exist for SMBs" (that's a market gap, not a customer problem)
- "Shopify's reporting is limited" (too solution-focused, not about the actual impact)
- "They need better data" (too vague—better for what purpose?)
- "They don't use any automation" (that's an observation, not a problem)
The "so what?" test: For each opportunity, you should be able to ask "so what?" and get to real consequences. "They don't have good analytics" → So what? → "They waste money on ads that don't work and miss their best opportunities" → That's the real opportunity.
Opportunity altitude—getting it right:
- Too high: "They want to grow their business" (true for everyone, not actionable)
- Too low: "The export button is on the wrong side of the screen" (too specific, too solution-adjacent)
- Just right: "They spend 2 hours per week manually copying data between systems because they can't figure out the integration"
The test: Can you design multiple different solutions for this opportunity? If not, it might be too specific. Does it describe a real situation with real consequences? If not, it might be too generic.
3. Solutions (The Next Layer)
What they should be: Specific ideas for how you might address an opportunity. At pre-PMF, these should range from very lightweight to more built-out, and you should have multiple solutions per opportunity.
Good solutions (for the opportunity "struggle to understand which channels drive profitable customers"):
- Weekly email digest showing revenue by source with simple profitability estimates
- Notion template with framework for tracking channel performance manually
- Spreadsheet tool that connects to Stripe and ad accounts
- Done-for-you monthly report service (human-powered, non-scalable)
- Mobile app that sends daily alerts when channel performance shifts
Not good solutions:
- Having only one solution per opportunity (it shows you jumped to the first idea)
- Solutions that are just feature lists: "Dashboard with graphs and filters"
- Solutions that are too big: "Full-featured analytics platform with AI predictions"
- Solutions that don't clearly connect to the specific opportunity
The diversity principle: If all your solutions look similar (all software, all DIY tools, all services), you're probably not exploring widely enough. Pre-PMF, you should be willing to consider solutions that don't scale, manual services, templates, or even concierge approaches.
4. Experiments (The Leaves)
What they should be: Specific, time-bound tests designed to validate whether a solution actually addresses the opportunity. Each experiment should have a clear hypothesis and defined success criteria you establish before running it.
The structure: "We believe [solution] will [result] for [opportunity]. We'll know we're right when [specific measurable outcome]."
Good experiments (for a solution like "Weekly email digest showing revenue by source"):
- "Send 10 prospects a mockup of the email; hypothesis: at least 6 will reply saying they'd want this, and 3 will ask about pricing"
- "Manually create and send the digest to 5 beta customers for 3 weeks; hypothesis: at least 4 will open it each week and 3 will take action based on it"
- "Build a landing page describing the digest; hypothesis: 10% of 200 visitors from our ICP will provide their email to get early access"
- "Interview 8 people currently solving this manually; hypothesis: at least 6 spend more than 2 hours/week on it and say they'd pay $50+/month to automate it"
Not good experiments:
- "Get feedback on the idea" (no hypothesis, no success criteria)
- "Build an MVP" (too big, not testing a specific assumption)
- "Launch a beta program" (what specifically are you testing?)
- "See if people like it" (too vague—like it enough to do what?)
- "Talk to 20 customers" (conversations are research, not experiments unless you're testing something specific)
Key experiment principles:
Small and fast: Pre-PMF experiments should be completable in days or weeks, not months. If an experiment takes a long time, break it into smaller tests.
Test assumptions, not build products: You're testing whether your thinking is correct—about the problem's urgency, the solution's fit, customer willingness to pay, etc.
Failure is valuable: A "failed" experiment that clearly invalidates an assumption saves you months of building the wrong thing. Design experiments where negative results are genuinely informative.
Cheapest test first: Before building anything, can you test with:
- Mockups or prototypes?
- Fake door tests (landing pages for non-existent products)?
- Manual/concierge delivery of the solution?
- Conversations with specific hypotheses?
Different types of pre-PMF experiments:
- Desirability tests: Do people actually want this? (Interviews, mockups, landing pages)
- Usability tests: Can they understand and use it? (Prototypes, walkthroughs)
- Feasibility tests: Can we actually build/deliver this? (Technical spikes, manual delivery)
- Viability tests: Will they pay enough to make this work? (Pricing conversations, pre-orders)
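Pulling the four levels together, here is a minimal sketch of the tree as a nested data structure, written in Python purely for illustration; the class and field names are assumptions, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str        # "We believe [solution] will [result] for [opportunity]"
    success_criteria: str  # defined before the test runs
    test_type: str         # "desirability", "usability", "feasibility", or "viability"

@dataclass
class Solution:
    idea: str
    experiments: list[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    problem: str                                         # framed in the customer's own words
    evidence: list[str] = field(default_factory=list)    # conversations where you heard it
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str                                         # the root: a learning goal or traction signal
    opportunities: list[Opportunity] = field(default_factory=list)
```

Keeping an evidence list on each opportunity makes it obvious when a branch rests on your assumptions rather than on actual customer conversations.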
Common Failures, Misunderstandings, and Pitfalls
Pitfall #1: Solution Disguised as Outcome
The mistake: "Our outcome is to build a mobile app for small retailers."
Why it's wrong: You've smuggled your solution (mobile app) into the outcome position. This blinds you to whether a mobile app is even the right approach.
How to fix it: Ask "why?" repeatedly. Why a mobile app? "To help retailers manage inventory." Why? "So they don't lose sales from stockouts." Now you have a real outcome: "Help retailers reduce lost sales from inventory issues."
Pitfall #2: Opportunities That Are Really Features You Want to Build
The mistake: Listing opportunities like "Need an AI chatbot," "Want automated workflows," "Require real-time dashboards."
Why it's wrong: These are solutions you're excited about, dressed up as customer needs. Real opportunities are solution-agnostic problems.
How to fix it: Go back to actual customer conversations. What were they trying to accomplish? What was frustrating them? Frame it in their language: "I'm constantly interrupted by the same basic questions" is an opportunity. "Need a chatbot" is not.
Pitfall #3: Too Few Opportunities (The Tunnel Vision Problem)
The mistake: Having only 1-2 opportunities under your outcome, often the ones that match your preconceived solution.
Why it's wrong: You're likely confirming your biases rather than genuinely exploring the problem space. If you've only found one or two problems in your entire ICP, you haven't talked to enough people or you've filtered what you heard through your solution lens.
How to fix it: Aim for 5-10+ opportunities initially. Some will be more important than others, but having multiple forces you to really listen and consider different angles on the problem space.
Pitfall #4: Not Actually Talking to Customers
The mistake: Filling out your tree based on what you think customers experience, competitive research, or online forum browsing.
Why it's wrong: You'll generate hypothetical opportunities that sound plausible but don't reflect real urgency, real budget, or real problem-solving behavior.
The reality check: For each opportunity, you should be able to say: "I heard this from [Name] at [Company], and also [Name] at [Company], and I observed [Name] struggling with exactly this."
Pitfall #5: Everything is Equally Weighted
The mistake: Treating all opportunities as equally important and trying to generate solutions for everything simultaneously.
Why it's wrong: You have limited resources. Part of the OST's value is helping you choose where to focus.
How to prioritize: Assess opportunities by:
- Frequency: How often does this problem occur?
- Intensity: How painful is it when it happens?
- Willingness to pay: Would they pay to solve this or just tolerate it?
- Number of people: How many within your ICP have this problem?
Start experiments on the opportunities that score highest. Keep the others visible but dormant.
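As a rough sketch of that prioritization, you could score each opportunity on the four factors above. The 1-to-5 scale, the equal weighting, and the example scores are all assumptions; the opportunities are the e-commerce examples from earlier:

```python
# Score each factor 1 (low) to 5 (high) based on customer evidence, not guesses.
# Equal weighting is an assumption; adjust to your own context.
def opportunity_score(frequency: int, intensity: int,
                      willingness_to_pay: int, reach: int) -> int:
    return frequency + intensity + willingness_to_pay + reach

opportunities = {
    "Can't tell which channels drive profitable customers": opportunity_score(4, 5, 5, 4),
    "Inventory constantly out of sync across platforms": opportunity_score(5, 4, 4, 3),
    "Customer support consumes 40% of our time": opportunity_score(5, 3, 2, 4),
}

# Start experiments at the top of this list; keep the rest visible but dormant.
for problem, score in sorted(opportunities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:2d}  {problem}")
```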
Pitfall #6: Solutions That Are Too Big to Experiment With
The mistake: Only considering solutions that would take months to build, making it impossible to run fast experiments.
Why it's wrong: You can't learn quickly if every solution idea requires a major engineering lift.
The pre-PMF principle: For every opportunity, at least one solution should be testable in 2 weeks or less. This might mean:
- Manual/concierge versions
- No-code prototypes
- Fake door tests (landing pages for vaporware)
- Services you deliver yourself before automating
Pitfall #7: Experiments That Don't Actually Test Anything
The mistake: "Experiments" like "Build MVP," "Launch beta," "Get feedback."
Why it's wrong: These aren't experiments—they're just work. Real experiments have a specific hypothesis and clear success criteria you define upfront.
Additional experiment pitfalls:
Building before testing desirability: Don't start with "Build feature X and see if people use it." Start with "Show mockup of feature X and see if people express genuine interest or commitment."
Only testing with friendlies: Your friend who's "in your target market" is not a good experiment subject. They'll be too nice. Test with people who have no relationship with you and no reason to spare your feelings.
Ambiguous success criteria: "We'll talk to customers and see what they think" leaves too much room for interpretation. Instead: "At least 7 of 10 will say they currently spend money trying to solve this problem."
Not defining success criteria upfront: If you wait until after the experiment to decide what "good" looks like, you'll rationalize whatever results you got. Commit to the threshold beforehand.
No kill criteria: Before running an experiment, decide: "What result would cause us to abandon this solution or opportunity entirely?" If you can't think of any result that would change your plans, you're not really experimenting.
Treating experiments as commitments: Just because you're experimenting with a solution doesn't mean you have to build it. Most experiments should fail or provide learning that changes your direction. That's success.
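One way to make the upfront commitment concrete is to write the success and kill thresholds down (as code or simply in a doc) before the experiment runs, then evaluate the result mechanically. A minimal sketch, with all names and numbers assumed for illustration:

```python
# Thresholds committed before the test runs, so results can't be rationalized afterwards.
SUCCESS_THRESHOLD = 0.6   # e.g. at least 6 of 10 prospects say they'd want this
KILL_THRESHOLD = 0.2      # e.g. 2 or fewer of 10 -> abandon this solution

def decide(positive: int, total: int) -> str:
    """Map an experiment result to a decision using the pre-committed thresholds."""
    rate = positive / total
    if rate >= SUCCESS_THRESHOLD:
        return "persevere: keep investing in this solution"
    if rate <= KILL_THRESHOLD:
        return "kill: drop this solution (and maybe re-examine the opportunity)"
    return "inconclusive: design a sharper follow-up experiment"

print(decide(positive=3, total=10))  # -> "inconclusive: design a sharper follow-up experiment"
```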
Pitfall #8: Treating the Tree as Permanent
The mistake: Building your tree once during a planning session and never revisiting it.
Why it's wrong: The entire point is continuous discovery. As you learn, opportunities should shift in priority, new ones should emerge, and solutions should evolve or get discarded.
How to use it: The tree is a living document. Weekly or bi-weekly, you should be adding learnings, pruning dead ends, and adjusting based on what experiments taught you.
Pitfall #9: Confusing the Tree with a Product Roadmap
The mistake: Thinking the solutions on your tree are your roadmap, in the order you'll build them.
Why it's wrong: Most solutions on your tree will never get built. Many opportunities won't pan out. The tree is an exploration tool, not a commitment.
The right mindset: You're mapping possibilities and systematically invalidating most of them. The tree helps you avoid building the wrong things, not ensure you build everything on it.
The Pre-PMF Mindset
The OST is particularly valuable for pre-PMF startups because it helps you resist the urge to build too soon. Your instinct is probably to start coding or designing immediately. The tree forces you to:
- Separate learning from building - Most of your early effort should be in understanding opportunities, not creating solutions
- Stay problem-focused - When you're pre-PMF, the problem is usually more stable than your solution
- Embrace multiple options - You don't know which problem is most valuable yet, so keep several in play
- Make learning explicit - Experiments force you to articulate what you're testing and what would change your mind
The tree isn't about being "complete" or "correct"—it's about being honest about what you know, what you're guessing, and what you need to learn next.
For each opportunity on your tree, you should be able to point to specific customer conversations where you heard about this problem. For each solution, you should be able to articulate why you think it might work and what assumption you're making. For each experiment, you should know beforehand what result would cause you to pivot or persevere.
This level of explicitness feels uncomfortable at first. It's much easier to say "let's just build it and see." But that comfort comes at the cost of months or years building the wrong thing. The OST trades short-term comfort for long-term clarity—and for pre-PMF startups, that clarity is the difference between finding product-market fit and running out of runway.
When to Use This Skill
I'll reference this skill when you:
- Ask about product discovery or validation frameworks
- Need help structuring customer research findings
- Want to evaluate problems or opportunities
- Are deciding what to build next
- Need guidance on running experiments
- Ask about Opportunity Solution Trees specifically
- Are working on pre-PMF product strategy