| name | ux-research-skill |
| description | Master UX research methods for data-driven design. Use for user research planning, research methods (interviews, surveys, usability testing, card sorting, tree testing, A/B testing), user personas (demographic/psychographic data, jobs-to-be-done), user journey mapping, empathy mapping, usability testing (moderated/unmoderated, remote/in-person), heuristic evaluation (Nielsen's 10 heuristics), analytics (Google Analytics, Mixpanel, Hotjar heatmaps), user testing recruitment, research synthesis (affinity mapping, themes), quantitative vs qualitative research, sample size calculation, research reporting, and actionable insights generation. Also use for Thai keywords "UX", "ประสบการณ์ผู้ใช้", "การใช้งาน", "ใช้งานง่าย", "ออกแบบ", "ดีไซน์", "การออกแบบ", "design", "จิตวิทยา", "พฤติกรรม", "จิตวิทยาผู้บริโภค", "จิตวิทยาการตลาด" |
UX Research Mastery Skill
Overview
UX Research = Understanding users to make data-driven design decisions
This skill covers:
- 🔬 Research Methods (Qualitative & Quantitative)
- 👥 User Personas & Journey Maps
- 🧪 Usability Testing
- 📊 Analytics & Data Analysis
- 📝 Research Planning & Reporting
- 🎯 Actionable Insights
Part 1: UX Research Fundamentals
1.1 Types of UX Research
Qualitative (Why & How)
Goal: Deep understanding of user behavior/motivations
Methods:
├─ User Interviews
├─ Usability Testing
├─ Field Studies
├─ Diary Studies
└─ Focus Groups
Output: Insights, themes, patterns
Sample size: 5-10 users often sufficient
Quantitative (How Many & How Much)
Goal: Measure behavior at scale
Methods:
├─ Surveys
├─ Analytics
├─ A/B Testing
├─ Card Sorting
└─ Tree Testing
Output: Numbers, statistics, metrics
Sample size: 100+ for statistical significance
1.2 When to Use Each Method
Discovery Phase (What to build?)
├─ User interviews
├─ Field studies
├─ Competitive analysis
└─ Surveys
Design Phase (How to build?)
├─ Card sorting (information architecture)
├─ Tree testing (navigation)
├─ Usability testing (prototypes)
└─ Heuristic evaluation
Launch Phase (Is it working?)
├─ A/B testing
├─ Analytics
├─ User feedback
└─ Post-launch usability testing
Part 2: Qualitative Research Methods
2.1 User Interviews
Types:
Structured: Fixed questions (like a survey)
Semi-structured: Flexible, with room for follow-ups
Unstructured: Open conversation
Best: Semi-structured (balances flexibility and consistency)
Planning:
1. Define goals (what do you want to learn?)
2. Recruit participants (5-10 per segment)
3. Write discussion guide (questions)
4. Schedule sessions (60 min each)
5. Prepare tools (Zoom, recording, notes)
Discussion Guide Example:
1. Introduction (5 min)
- Thank participant
- Explain purpose
- Get consent to record
2. Background (10 min)
- "Tell me about your role..."
- "How often do you [task]?"
3. Current Behavior (20 min)
- "Walk me through how you [task] today"
- "What tools do you use?"
- "What frustrates you?"
4. Needs & Pain Points (15 min)
- "What would make [task] easier?"
- "If you had a magic wand..."
5. Wrap-up (10 min)
- "Anything else to add?"
- Thank & compensate
Best Practices:
✅ Do:
- Ask open-ended questions ("How do you...?")
- Follow up ("Tell me more about that")
- Embrace silence (let them think)
- Record session (with permission)
❌ Don't:
- Ask leading questions ("Don't you think...?")
- Interview friends/family (biased)
- Ask what they want (ask what they do)
- Talk too much (80/20 rule: the participant should talk 80% of the time)
2.2 Usability Testing
What is Usability Testing?
- Watch users complete tasks with your product
- Identify where they struggle
Types:
Moderated (with facilitator)
├─ In-person
└─ Remote (Zoom)
Unmoderated (self-guided)
├─ UserTesting.com
├─ Maze
└─ Lookback
Process:
1. Define tasks (3-5 tasks)
"Find a blue shirt in size M and add to cart"
2. Recruit participants (5-8 users)
Nielsen: 5 users find ~85% of usability issues (see the quick check after this list)
3. Run sessions (30-45 min)
- Give task, don't help
- Ask "think aloud" (verbalize thoughts)
- Observe struggles
4. Analyze findings
- Where did users get stuck?
- What confused them?
- Success rate per task?
5. Report & prioritize fixes
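The 5-user guideline above comes from the Nielsen/Landauer problem-discovery model: the share of problems found by n users is roughly 1 - (1 - L)^n, where L ≈ 0.31 is the average chance that one user hits a given problem. A quick check of that curve (a sketch using the model's standard assumptions, not data from any specific study):

```python
# Nielsen/Landauer model: proportion of usability problems found by n test users.
# L is the average probability that a single user exposes a given problem (~0.31).
def problems_found(n_users: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n_users

for n in (1, 3, 5, 8, 15):
    print(f"{n:>2} users -> {problems_found(n):.0%} of problems found")
# 5 users -> ~84%, which is why 5-8 participants per round is usually enough;
# a second round of testing finds more than adding extra users to the first.
```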
Think Aloud Protocol:
Facilitator: "Please think aloud as you work"
User: "Hmm, I'm looking for the search...
Oh, here it is, top-right.
I'll type 'blue shirt'...
Wait, why are there only 2 results?
I expected more..."
→ Reveals mental model, expectations, confusion!
Metrics to Track:
Task success rate: 80%+ is good
Time on task: Compare to baseline
Error rate: # mistakes per task
Satisfaction (SUS): System Usability Scale (0-100)
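SUS scoring follows a fixed recipe: each of the 10 statements is answered on a 1-5 agreement scale, odd-numbered items contribute (answer - 1), even-numbered items contribute (5 - answer), and the total is multiplied by 2.5 to reach 0-100. A minimal scoring sketch with one hypothetical respondent:

```python
# System Usability Scale (SUS): 10 items, each answered 1 (strongly disagree)
# to 5 (strongly agree). Odd items are positively worded, even items negatively.
def sus_score(answers: list[int]) -> float:
    assert len(answers) == 10 and all(1 <= a <= 5 for a in answers)
    odd = sum(a - 1 for a in answers[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - a for a in answers[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                 # scales the total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical respondent -> 85.0
# Rough benchmark: ~68 is average, 80+ is generally considered good.
```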
2.3 Field Studies (Contextual Inquiry)
What?
- Observe users in their natural environment
- See actual behavior (not self-reported)
Example:
Research goal: Improve hospital nurse software
Don't: Interview nurses in conference room
Do: Shadow nurses during shift
Observations:
- Nurses interrupted every 5 min
- Use software while walking
- Need quick access, not complex menus
Insight: Design for mobile, minimal taps
2.4 Card Sorting (Information Architecture)
What?
- Users organize content into categories
- Reveals mental models
Types:
Open Card Sort
├─ Users create category names
└─ Discovers unexpected groupings
Closed Card Sort
├─ You provide category names
└─ Tests if categories make sense
Process:
1. Create cards (one per content item)
Example: "User Profile", "Settings", "Billing", "Support"
2. Ask users to group cards
"Group these into categories that make sense to you"
3. Analyze patterns (see the co-occurrence sketch after this list)
- Which items were grouped together?
- What category names did users create?
4. Design navigation based on results
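One common way to analyze step 3 is a co-occurrence matrix: for each pair of cards, count how many participants placed them in the same group. A minimal sketch with hypothetical sort data (card-sorting tools export something equivalent):

```python
from collections import Counter
from itertools import combinations

# Hypothetical open-sort results: one dict per participant, {category: [cards]}.
sorts = [
    {"Account": ["User Profile", "Settings"], "Money": ["Billing"], "Help": ["Support"]},
    {"My stuff": ["User Profile", "Settings", "Billing"], "Help": ["Support"]},
    {"Profile": ["User Profile"], "Admin": ["Settings", "Billing"], "Help": ["Support"]},
]

pair_counts = Counter()
for sort in sorts:
    for cards in sort.values():
        for a, b in combinations(sorted(cards), 2):
            pair_counts[(a, b)] += 1   # the pair landed in the same group

for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n}/{len(sorts)} participants")
# Pairs grouped by most participants are strong candidates for the same nav section.
```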
Tools:
OptimalSort (online)
Maze
UserZoom
In-person: Physical cards
Part 3: Quantitative Research Methods
3.1 Surveys
When to Use:
- Collect feedback from many users (100+)
- Measure satisfaction, preferences
- Validate qualitative findings
Question Types:
Multiple Choice
├─ "How often do you use our app?"
□ Daily □ Weekly □ Monthly □ Rarely
Rating Scale (Likert)
├─ "How satisfied are you with checkout?"
1 (Very dissatisfied) → 5 (Very satisfied)
Open-Ended
├─ "What would improve your experience?"
[Text box]
Best Practices:
✅ Do:
- Keep short (5-10 min max)
- One question at a time
- Use simple language
- Include progress bar
- Offer incentive ($10 gift card)
❌ Don't:
- Leading questions ("Don't you love our app?")
- Double-barreled ("Is the app fast and easy?")
- Too many open-ended questions (users skip)
Survey Tools:
Google Forms (free)
Typeform (beautiful)
SurveyMonkey
Qualtrics (enterprise)
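Before acting on a survey percentage, it helps to know its margin of error; for a proportion it is roughly 1.96 × sqrt(p(1-p)/n) at 95% confidence. A minimal sketch with hypothetical response counts:

```python
import math

# 95% margin of error for a proportion estimated from n survey responses.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

n = 150                    # hypothetical: 150 responses collected
p = 90 / n                 # hypothetical: 90 answered "satisfied" or better -> 60%
print(f"{p:.0%} satisfied, +/- {margin_of_error(p, n):.1%}")   # 60% +/- 7.8%
# The same 60% from only 30 responses carries a ~17.5% margin of error,
# which is why 100+ responses is the usual floor for quantitative claims.
```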
3.2 A/B Testing
What?
- Show variant A to 50% users, variant B to 50%
- Measure which performs better
Example:
Hypothesis: Green CTA button will increase clicks
A (Control): Blue button "Sign Up"
B (Variant): Green button "Sign Up"
Run test for 2 weeks:
- A: 10% click rate
- B: 12% click rate
Result: B wins! A +20% relative improvement (12% vs 10% click rate)
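Before declaring a winner, check whether the difference is statistically significant; a two-proportion z-test is the standard tool. A sketch using statsmodels with hypothetical visitor and click counts (at low traffic, the same 10% vs 12% gap can easily be noise):

```python
# pip install statsmodels
from statsmodels.stats.proportion import proportions_ztest

clicks = [500, 600]        # hypothetical clicks on A and B
visitors = [5000, 5000]    # hypothetical visitors per variant

z_stat, p_value = proportions_ztest(count=clicks, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> unlikely to be chance
# With only 1,000 visitors per variant, the same 10% vs 12% rates give p ≈ 0.15,
# so keep the test running until the planned sample size is reached.
```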
What to Test:
Headlines: "Get Started" vs "Try Free"
CTA buttons: Color, size, text
Page layout: Order of sections
Pricing: $9/mo vs $99/year
Images: Product photo vs lifestyle photo
Sample Size Calculator:
Need ~1,000 visitors per variant minimum
Use: https://abtestguide.com/calc/
Example:
- Baseline conversion: 5%
- Minimum detectable effect: 20% increase (5% → 6%)
- Required sample: 8,000 visitors per variant
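The same calculation can be scripted; a sketch using statsmodels for the 5% → 6% example. The exact figure depends on the significance level and statistical power you pick, which is why different calculators report different numbers:

```python
# pip install statsmodels
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05   # current conversion rate
target = 0.06     # minimum detectable effect: +20% relative

effect = proportion_effectsize(target, baseline)   # Cohen's h for two proportions
n_per_variant = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                             power=0.8, alternative="two-sided")
print(f"~{n_per_variant:,.0f} visitors per variant")
# ~4,000 at 80% power and 5% significance; stricter settings (e.g. 95-99% power)
# push the requirement toward the 8,000 figure above.
```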
Tools:
Google Optimize (sunset in 2023; GA4 now integrates with third-party testing tools instead)
Optimizely (enterprise)
VWO
Statsig
3.3 Analytics (Behavioral Data)
Key Metrics:
Traffic Metrics:
├─ Sessions: # visits
├─ Users: # unique visitors
├─ Pageviews: # page loads
└─ Bounce rate: % leave after 1 page
Engagement Metrics:
├─ Time on page: How long users stay
├─ Pages per session: How many pages viewed
└─ Scroll depth: How far users scroll
Conversion Metrics:
├─ Conversion rate: % who complete goal
├─ Goal completions: # signups, purchases
└─ Funnel drop-off: Where users abandon
Google Analytics 4:
Events to track:
├─ Button clicks: "CTA_Click", "Add_to_Cart"
├─ Form submissions: "Form_Submit", "Signup_Complete"
├─ Video plays: "Video_Start", "Video_Complete"
└─ Scroll depth: 25%, 50%, 75%, 100%
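Once these events are exported (from GA4, Mixpanel, or any event log), funnel drop-off can be computed directly. A minimal pandas sketch with hypothetical event names and counts:

```python
import pandas as pd

# Hypothetical counts of unique users reaching each funnel step, in order.
funnel = pd.DataFrame({
    "step": ["View_Product", "Add_to_Cart", "Begin_Checkout", "Purchase"],
    "users": [10_000, 3_200, 1_900, 1_100],
})

funnel["step_conversion"] = funnel["users"] / funnel["users"].shift(1)    # vs previous step
funnel["overall_conversion"] = funnel["users"] / funnel["users"].iloc[0]  # vs funnel entry
funnel["drop_off"] = 1 - funnel["step_conversion"]

print(funnel.to_string(index=False))
# The transition with the largest drop_off is where usability testing and
# checkout fixes will pay off most.
```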
Heatmaps (Hotjar, Crazy Egg):
Click Heatmap: Where users click most
Scroll Heatmap: How far users scroll
Move Heatmap: Where users move mouse
Insight: If the CTA sits below the fold and 60% of users never scroll that far, move the CTA up!
Part 4: User Personas & Journey Maps
4.1 User Personas
What?
- Fictional character representing user segment
- Based on research data (not assumptions!)
Persona Template:
Name: Sarah the Busy Mom
Demographics:
├─ Age: 35
├─ Location: Suburban Chicago
├─ Occupation: Marketing Manager
├─ Family: Married, 2 kids (5, 8 years old)
└─ Tech savvy: Medium
Goals:
├─ Order groceries quickly (30 min max)
├─ Find healthy meals for family
└─ Stay under $200/week budget
Pain Points:
├─ No time to visit physical store
├─ Previous app had confusing checkout
└─ Wants personalized recommendations
Behaviors:
├─ Shops online 2× per week
├─ Uses mobile app (not desktop)
├─ Prefers saved lists (repeat orders)
└─ Price-conscious but values quality
Quote: "I just want to get groceries done quickly
so I can focus on my kids"
How to Create:
1. Conduct user research (interviews, surveys)
2. Identify patterns (common goals, behaviors)
3. Segment users (e.g., Budget Shopper vs Premium Shopper)
4. Create 2-4 personas (not 20!)
5. Validate with team (does this feel real?)
Use Personas For:
✅ Design decisions ("Would Sarah understand this?")
✅ Prioritization ("Does this solve Sarah's pain point?")
✅ Team alignment (shared understanding of users)
❌ Not for:
- Stereotyping users
- Making up data (always research-based!)
4.2 Jobs-to-be-Done (JTBD)
Framework:
When [situation],
I want to [motivation],
So I can [expected outcome].
Example:
"When I'm hungry at work,
I want to order lunch quickly,
So I can get back to my meeting in 15 min."
Better than:
Traditional: "Millennials want fast food"
JTBD: "Busy professionals need lunch in < 15 min"
→ Focuses on context + motivation, not demographics!
4.3 User Journey Map
What?
- Visual story of user experience over time
- Shows pain points, emotions, touchpoints
Journey Map Structure:
Persona: Sarah the Busy Mom
Scenario: Ordering groceries online
Stages:
├─ 1. Awareness (discovers app)
├─ 2. Consideration (downloads, explores)
├─ 3. First purchase (creates account, orders)
├─ 4. Ongoing use (reorders weekly)
└─ 5. Advocacy (recommends to friends)
For each stage:
├─ Actions (what user does)
├─ Touchpoints (where they interact)
├─ Emotions (happy, frustrated, confused)
├─ Pain points (struggles)
└─ Opportunities (improvements)
Example Stage:
Stage 3: First Purchase
Actions:
1. Opens app
2. Creates account
3. Browses products
4. Adds items to cart
5. Checks out
6. Receives confirmation
Emotions:
1. 😊 Excited (new app!)
2. 😐 Neutral (signup okay)
3. 🤔 Confused (can't find organic section)
4. 😊 Happy (found what I need)
5. 😡 Frustrated (checkout has 5 steps!)
6. 😌 Relieved (order placed)
Pain Points:
- Hard to find organic products
- Checkout too long (5 steps)
- No saved payment method
Opportunities:
- Add "Organic" filter
- One-page checkout
- Save payment for next time
Part 5: Research Analysis & Synthesis
5.1 Affinity Mapping
What?
- Organize research findings into themes
- Bottom-up pattern recognition
Process:
1. Write observations on sticky notes (one per note)
- "User couldn't find search button"
- "User expected back button top-left"
- "User abandoned after seeing price"
2. Group similar notes together
Theme: Navigation Issues
├─ Couldn't find search
├─ Expected back button
└─ Confused by menu structure
3. Name themes
- Navigation Issues
- Pricing Concerns
- Performance Problems
4. Prioritize themes (which impact most users?)
Tools:
Physical: Sticky notes on wall
Digital: Miro, Mural, FigJam
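For large studies with hundreds of notes, a rough clustering pass can propose starting groups before manual affinity mapping; the themes still need human review and naming. A sketch using scikit-learn on hypothetical observations:

```python
# pip install scikit-learn
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical sticky-note observations from usability sessions.
notes = [
    "User couldn't find the search button",
    "User expected the back button top-left",
    "User abandoned after seeing the price",
    "Confused by the menu structure",
    "Said the subscription felt too expensive",
    "Scrolled past the navigation twice without noticing it",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(notes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Proposed group {cluster}:")
    for note, label in zip(notes, labels):
        if label == cluster:
            print(f"  - {note}")
# Treat the output as a first pass only; merge, split, and name themes manually.
```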
5.2 Research Insights (not findings!)
Finding vs Insight:
Finding: Observation (what happened)
Insight: Interpretation (why it matters + what to do)
❌ Finding only:
"5 out of 8 users clicked the wrong button"
✅ Insight:
"Users expect 'Next' button on bottom-right (mental model from other apps).
Our button is top-left, causing confusion.
→ Move 'Next' button to bottom-right, increase size to 44×44pt."
Insight Template:
Observation: [What we saw]
Impact: [Why it matters]
Recommendation: [What to do]
Example:
Observation: 60% of mobile users abandon cart at checkout
Impact: Losing $50K revenue/month
Recommendation: Implement Apple Pay / Google Pay for 1-tap checkout
Expected result: Reduce abandonment by 30% (+$15K/month)
5.3 Research Report Structure
Executive Summary (1 page):
- Goals of research
- Methods used
- Key insights (top 3-5)
- Recommendations (prioritized)
Methodology:
- Participants (who, how many)
- Methods (interviews, testing, etc.)
- Timeline (when conducted)
Findings:
For each finding:
├─ Quote from participant
├─ Screenshot/video clip
├─ Supporting data (% of users)
└─ Recommendation
Prioritization Matrix:
High Impact + Easy → Do first!
High Impact + Hard → Plan for later
Low Impact + Easy → Nice to have
Low Impact + Hard → Don't do
Part 6: Research Planning & Recruitment
6.1 Writing a Research Plan
Template:
1. Background
- Why are we doing this research?
- What decisions will it inform?
2. Goals & Research Questions
- What do we want to learn?
- What questions need answers?
3. Methodology
- Which methods? (interviews, testing, etc.)
- Why these methods?
4. Participants
- Who will we recruit? (criteria)
- How many? (sample size)
5. Timeline
- Week 1: Recruit participants
- Week 2: Conduct sessions
- Week 3: Analyze & report
6. Deliverables
- Research report (PDF)
- Presentation to stakeholders
- Prototype with recommendations
6.2 Participant Recruitment
Screener Survey:
Goal: Find right participants
Questions:
1. How often do you shop online?
□ Daily □ Weekly □ Monthly □ Never
→ Reject "Never" (not our target)
2. What devices do you use?
□ iPhone □ Android □ Desktop
→ Need mix of mobile users
3. Age range?
□ 18-24 □ 25-34 □ 35-44 □ 45+
4. Have you worked in UX/design?
□ Yes □ No
→ Reject "Yes" (biased)
Recruitment Channels:
User Testing Platforms:
├─ UserTesting.com ($100/participant)
├─ Respondent.io
└─ User Interviews
Your Own Users:
├─ In-app popup ("Want to give feedback?")
├─ Email list
└─ Social media
Friends & Family:
├─ Use only for early-stage concepts
└─ Never for final validation (biased!)
Incentives:
Interviews (60 min): $50-100 gift card
Usability tests (30 min): $25-50 gift card
Surveys (10 min): $10 gift card or raffle
Part 7: Heuristic Evaluation
7.1 Nielsen's 10 Usability Heuristics
1. Visibility of System Status
✅ Good: Loading spinner when fetching data
❌ Bad: No feedback after clicking submit
2. Match Between System and Real World
✅ Good: "Trash" icon for delete
❌ Bad: Technical jargon ("Error 404: Not Found")
3. User Control and Freedom
✅ Good: Undo button, back button
❌ Bad: No way to cancel action
4. Consistency and Standards
✅ Good: Save button always top-right
❌ Bad: Save button moves per page
5. Error Prevention
✅ Good: Confirm before deleting ("Are you sure?")
❌ Bad: Delete immediately, no undo
6. Recognition Rather than Recall
✅ Good: Show recent searches
❌ Bad: Force users to remember complex IDs
7. Flexibility and Efficiency of Use
✅ Good: Keyboard shortcuts (Cmd+S to save)
❌ Bad: Only mouse clicks supported
8. Aesthetic and Minimalist Design
✅ Good: Clean interface, focused content
❌ Bad: Cluttered with ads, popups, unnecessary info
9. Help Users Recognize, Diagnose, and Recover from Errors
✅ Good: "Email invalid. Please use format: user@example.com"
❌ Bad: "Error 42"
10. Help and Documentation
✅ Good: Contextual help tooltips
❌ Bad: No help available
Severity Ratings:
0 = Not a problem
1 = Cosmetic (fix if time)
2 = Minor (low priority)
3 = Major (high priority)
4 = Catastrophic (fix immediately!)
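With 3-5 evaluators, severity ratings are usually collected independently and then averaged per issue to set priority. A minimal aggregation sketch with hypothetical issues and ratings:

```python
from statistics import mean

# Hypothetical: each issue rated 0-4 independently by three evaluators.
ratings = {
    "No feedback after clicking Submit":     [3, 4, 3],
    "Save button position changes per page": [2, 3, 2],
    "Error message shows only 'Error 42'":   [4, 4, 3],
    "Missing tooltip on the export icon":    [1, 1, 2],
}

for issue, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{mean(scores):.1f}  {issue}")
# Average severity >= 3 -> fix before launch; 1-2 -> backlog.
# Large disagreement between evaluators deserves a discussion, not silent averaging.
```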
✅ Final Summary Checklist
Research Planning
- Define research goals (what to learn?)
- Choose methods (qual vs quant, discovery vs validation)
- Write research plan (background, goals, methodology)
- Recruit participants (5-10 for qual, 100+ for quant)
Qualitative Methods
- Conduct user interviews (5-10 participants, 60 min)
- Run usability testing (5-8 users, 3-5 tasks)
- Perform field studies (observe in context)
- Do card sorting (information architecture)
Quantitative Methods
- Create surveys (5-10 min, multiple choice + rating)
- Run A/B tests (1,000+ visitors per variant)
- Analyze analytics (Google Analytics, heatmaps)
- Calculate sample sizes (statistical significance)
Analysis & Synthesis
- Affinity mapping (organize findings into themes)
- Convert findings to insights (observation + impact + recommendation)
- Prioritize issues (impact vs effort matrix)
- Create research report (executive summary, findings, recommendations)
Personas & Journeys
- Create user personas (2-4, research-based)
- Define jobs-to-be-done (context + motivation + outcome)
- Map user journeys (stages, actions, emotions, pain points)
Testing & Evaluation
- Run heuristic evaluation (Nielsen's 10 heuristics)
- Test accessibility (WCAG, screen readers)
- Validate with real users (not friends/family!)
🎓 Congratulations!
You've mastered UX Research!
Resources:
- Nielsen Norman Group: https://www.nngroup.com/
- UXPA (User Experience Professionals Association): https://uxpa.org/
- UserTesting Blog: https://www.usertesting.com/blog
- Books:
- "Don't Make Me Think" by Steve Krug
- "The Mom Test" by Rob Fitzpatrick
- "Just Enough Research" by Erika Hall
Remember: Good design is research-informed design. Don't guess - test with real users!
Good luck researching! 🔬🚀