Six Thinking Hats for Testing

@proffesor-for-testing/agentic-qe

Install Skill

1. Download the skill.
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section.
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file.

Note: Verify the skill by reading through its instructions before using it.

SKILL.md

name: Six Thinking Hats for Testing
description: Apply Edward de Bono's Six Thinking Hats methodology to software testing for comprehensive quality analysis. Use when designing test strategies, conducting test retrospectives, analyzing test failures, evaluating testing approaches, or facilitating testing discussions. Each hat provides a distinct testing perspective: facts (White), risks (Black), benefits (Yellow), creativity (Green), emotions (Red), and process (Blue).

Six Thinking Hats for Testing

What This Skill Does

Applies Edward de Bono's Six Thinking Hats framework to software testing contexts, enabling structured exploration of quality concerns from six distinct perspectives. Each "hat" represents a specific mode of thinking that helps teams systematically analyze testing scenarios, uncover blind spots, and make better quality decisions.

Prerequisites

  • Basic understanding of software testing concepts
  • Familiarity with your application under test
  • Team collaboration skills (for group sessions)
  • Open mindset to different perspectives

Quick Start

Basic Usage Pattern

# 1. Define the testing focus
FOCUS="Authentication module test strategy"

# 2. Apply each hat sequentially (3-5 minutes each)
# White Hat: What test data/metrics do we have?
# Red Hat: What are our gut feelings about quality?
# Black Hat: What could go wrong? What are the risks?
# Yellow Hat: What are the benefits of our approach?
# Green Hat: What creative test approaches could we try?
# Blue Hat: How should we organize our testing process?

# 3. Document insights from each perspective
# 4. Synthesize into actionable test plan

Quick Example - API Testing

White Hat (Facts): We have 47 API endpoints, 30% test coverage, 12 integration tests, average response time 120ms.

Black Hat (Risks): No authentication tests, rate limiting untested, error handling incomplete, edge cases missing.

Yellow Hat (Benefits): Fast baseline tests exist, good documentation, CI/CD integrated, team has API testing experience.

Green Hat (Creative): Could generate tests from OpenAPI spec, use contract testing with Pact, chaos testing for resilience, property-based testing for edge cases.

Red Hat (Emotions): Team feels confident about happy paths but anxious about security. Frustrated by flaky network tests.

Blue Hat (Process): Prioritize security tests first, add contract testing next sprint, dedicate 20% time to exploratory testing, schedule weekly test reviews.
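
One way to start closing the Black Hat gap above ("rate limiting untested") is a quick probe like the following sketch. It assumes pytest and the requests library; the endpoint URL, burst size, and the expectation of an HTTP 429 response are illustrative and depend on the API's documented limits.

# Minimal sketch: fire a burst of requests and expect the API to start throttling.
# The endpoint, burst size, and 429 expectation are assumptions about the system under test.
import requests

def test_rate_limit_throttles_bursts():
    responses = [requests.get("http://localhost:8000/api/items") for _ in range(200)]
    assert any(r.status_code == 429 for r in responses)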


The Six Hats Explained for Testing

🤍 White Hat - Facts & Data

Focus: Objective information, test metrics, data

Testing Questions:

  • What test coverage do we currently have?
  • What metrics are we tracking? (pass/fail rate, execution time, defect density)
  • What test data is available?
  • What test environments exist?
  • What is our defect history?
  • What performance benchmarks do we have?
  • How many test cases exist? (manual vs automated)

Deliverable: Quantitative testing baseline

Example Output:

Coverage: 67% line coverage, 45% branch coverage
Test Suite: 1,247 unit tests, 156 integration tests, 23 E2E tests
Execution Time: Unit 3min, Integration 12min, E2E 45min
Defects: 23 open (5 critical, 8 major, 10 minor)
Environments: Dev, Staging, Production
Last Release: 98.5% pass rate, 2 critical bugs in production

🖤 Black Hat - Risks & Cautions

Focus: Critical judgment, potential problems, risks

Testing Questions:

  • What could go wrong in production?
  • What are we NOT testing?
  • Where are the coverage gaps?
  • What assumptions might be wrong?
  • What edge cases are we missing?
  • What security vulnerabilities exist?
  • What performance bottlenecks could occur?
  • What integration points could fail?
  • What technical debt impacts quality?

Deliverable: Comprehensive risk assessment

Example Output:

HIGH RISKS:
- No load testing (potential production outage)
- Authentication edge cases untested (security vulnerability)
- Database failover never tested (data loss risk)
- Mobile app on older OS versions untested (user impact)

MEDIUM RISKS:
- Flaky tests reducing CI/CD confidence
- Manual regression testing taking 2 days
- Limited error logging in production

ASSUMPTIONS TO CHALLENGE:
- "Users will always have internet" (offline mode untested)
- "Data migrations will be backward compatible" (rollback untested)

💛 Yellow Hat - Benefits & Optimism

Focus: Positive thinking, opportunities, value

Testing Questions:

  • What's working well in our testing?
  • What strengths can we leverage?
  • What value does our testing provide?
  • What opportunities exist to improve quality?
  • What tools/skills do we have?
  • What best practices are we following?
  • What quick wins are available?

Deliverable: Strengths and opportunities assessment

Example Output:

STRENGTHS:
- Strong CI/CD pipeline with automated testing
- Team has expertise in test automation
- Good test data management practices
- Stakeholders value quality and testing

OPPORTUNITIES:
- Reuse existing test framework for new features
- Leverage AI tools for test generation
- Expand performance testing to prevent issues
- Share test patterns across teams

QUICK WINS:
- Add smoke tests to reduce production incidents
- Automate manual regression tests (save 2 days/release)
- Implement contract testing (improve team coordination)

💚 Green Hat - Creativity & Alternatives

Focus: New ideas, creative solutions, alternatives

Testing Questions:

  • What innovative testing approaches could we try?
  • How else could we test this?
  • What if we completely changed our approach?
  • What emerging testing techniques could we adopt?
  • How can we make testing more efficient/effective?
  • What tools or frameworks could we explore?
  • How can we test the "untestable"?

Deliverable: Innovative testing ideas

Example Output:

CREATIVE IDEAS:

1. AI-Powered Test Generation
   - Use LLMs to generate test cases from requirements
   - Generate edge cases from code analysis
   - Auto-generate test data with realistic patterns

2. Chaos Engineering
   - Randomly terminate services to test resilience
   - Inject network latency to test timeout handling
   - Corrupt data to test error recovery

3. Property-Based Testing
   - Define properties that should always hold
   - Generate thousands of random inputs
   - Uncover edge cases humans wouldn't think of (see the sketch after this list)

4. Visual Regression Testing
   - Screenshot comparison for UI changes
   - AI-powered visual anomaly detection
   - Cross-browser visual testing

5. Testing in Production
   - Canary deployments with real user traffic
   - Feature flags for gradual rollout
   - Synthetic monitoring for proactive detection

6. Exploratory Testing Sessions
   - Time-boxed unscripted testing
   - Bug bash events with whole team
   - User journey walkthroughs
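
Idea 3 is the most directly code-able of the list. Below is a minimal property-based sketch using the hypothesis library (assuming pip install hypothesis pytest); format_amount and parse_amount are hypothetical stand-ins for whatever round-trip property your system should preserve.

# Property-based sketch: formatting then parsing a money amount must return the original value.
# The two functions are hypothetical stand-ins for the code under test.
from hypothesis import given, strategies as st

def format_amount(cents: int) -> str:
    return f"{cents // 100}.{cents % 100:02d}"

def parse_amount(text: str) -> int:
    dollars, cents = text.split(".")
    return int(dollars) * 100 + int(cents)

@given(st.integers(min_value=0, max_value=10_000_000))
def test_amount_round_trip(cents):
    # hypothesis generates thousands of inputs, including edge cases humans rarely write by hand.
    assert parse_amount(format_amount(cents)) == cents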

❤️ Red Hat - Emotions & Intuition

Focus: Feelings, hunches, instincts (no justification needed)

Testing Questions:

  • How do you FEEL about the quality?
  • What's your gut reaction to the current test coverage?
  • Where do you feel uneasy or anxious?
  • What gives you confidence in the system?
  • What frustrates you about testing?
  • Where do you sense hidden problems?
  • What excites you about testing improvements?

Deliverable: Emotional landscape of testing

Example Output:

FEELINGS ABOUT QUALITY:

Confident About:
- "The unit tests make me feel safe to refactor"
- "I trust the CI/CD pipeline"
- "The API tests are solid"

Anxious About:
- "I have a bad feeling about the authentication flow"
- "Something feels off about the payment processing"
- "I'm worried about the database migration"

Frustrated By:
- "The test suite is too slow"
- "Flaky tests waste my time"
- "Manual testing feels like groundhog day"

Excited About:
- "The new test framework looks promising"
- "AI test generation could save us so much time"

Gut Instincts:
- "I don't think we're testing multi-user scenarios enough"
- "The error handling feels brittle"
- "Production is going to surprise us"

Note: Red Hat requires NO justification. Intuition often catches issues logic misses.


🔵 Blue Hat - Process & Organization

Focus: Metacognition, process control, orchestration

Testing Questions:

  • What testing process should we follow?
  • How should we organize our testing efforts?
  • What's our test strategy?
  • How do we prioritize testing?
  • What's the agenda for this testing discussion?
  • How do we measure testing success?
  • What's the next step?
  • How do we integrate testing into development?

Deliverable: Structured test plan and process

Example Output:

TESTING PROCESS PLAN:

1. Test Strategy Definition
   Objective: Establish testing approach for Q2 release
   Approach: Risk-based testing with automation priority
   Success Criteria: 80% automated coverage, <5% production defects

2. Testing Prioritization
   P0: Security, authentication, payment processing
   P1: Core user journeys, data integrity
   P2: Performance, edge cases
   P3: UI polish, nice-to-have features

3. Testing Workflow
   Week 1-2: White Hat (gather facts), Black Hat (risk analysis)
   Week 3-4: Green Hat (design creative tests), Blue Hat (plan execution)
   Week 5-8: Execute tests, Yellow Hat (optimize), Red Hat (validate feel)
   Week 9: Final Blue Hat (retrospective, lessons learned)

4. Meeting Cadence
   - Daily: Test execution standup (15 min)
   - Weekly: Hat rotation session (90 min, different hat each week)
   - Bi-weekly: Test metrics review
   - Monthly: Testing retrospective

5. Decision Points
   - Go/No-Go decision requires all hats completed
   - Black Hat veto power for critical risks
   - Green Hat ideas evaluated monthly
   - Red Hat concerns investigated within 48 hours

6. Documentation
   - Test strategy document (Blue Hat)
   - Risk register (Black Hat)
   - Test metrics dashboard (White Hat)
   - Innovation backlog (Green Hat)

Step-by-Step Guide

Phase 1: Preparation (10 minutes)

Step 1: Define the Testing Focus

Be specific about what you're analyzing:
✅ GOOD: "Test strategy for user authentication feature"
✅ GOOD: "Root cause analysis of payment processing bug"
✅ GOOD: "Evaluate testing approach for API v2 migration"
❌ BAD: "Improve our testing" (too vague)

Step 2: Gather Context

Collect relevant information:
- Current test coverage reports
- Recent defect trends
- Test execution metrics
- Stakeholder concerns
- Technical architecture diagrams

Step 3: Choose Format

  • Solo Session: Apply hats sequentially, 3-5 min each (30 min total)
  • Team Session: Rotate hats as group, 10 min each (60 min total)
  • Async Session: Each person contributes to all hats over 2-3 days

Phase 2: Hat Rotation (Main Work)

Approach 1: Sequential (Recommended for Solo)

Apply each hat in order, spending dedicated time in each mode:

## White Hat Session (5 minutes)
Focus: Facts only, no opinions
Output: [List all objective testing data]

## Red Hat Session (3 minutes)
Focus: Gut feelings, no justification
Output: [Capture instincts and emotions]

## Black Hat Session (7 minutes)
Focus: Critical analysis, risks
Output: [Comprehensive risk list]

## Yellow Hat Session (5 minutes)
Focus: Positive aspects, opportunities
Output: [Strengths and possibilities]

## Green Hat Session (7 minutes)
Focus: Creative alternatives
Output: [Innovative testing ideas]

## Blue Hat Session (5 minutes)
Focus: Process and next steps
Output: [Action plan and structure]

Approach 2: Cycling (Good for Team Discussions)

Cycle through hats multiple times on different aspects:

Round 1: All hats on "Current State"
Round 2: All hats on "Proposed Solution A"
Round 3: All hats on "Proposed Solution B"
Round 4: All hats on "Implementation Plan"

Approach 3: Parallel (For Written Collaboration)

Team members work on different hats simultaneously, then share:

Person 1: White Hat (gather all facts)
Person 2: Black Hat (identify all risks)
Person 3: Yellow Hat (find opportunities)
Person 4: Green Hat (brainstorm alternatives)
Person 5: Red Hat (gut check from fresh eyes)
Facilitator: Blue Hat (synthesize findings)

Phase 3: Synthesis (15 minutes)

Step 1: Review All Hat Outputs

Create a summary document with all six perspectives:

# Testing Analysis: [Feature/Issue Name]

## 🤍 Facts (White Hat)
[Objective data and metrics]

## ❤️ Feelings (Red Hat)
[Team instincts and emotions]

## 🖤 Risks (Black Hat)
[Potential problems and gaps]

## 💛 Benefits (Yellow Hat)
[Strengths and opportunities]

## 💚 Creative Ideas (Green Hat)
[Innovative approaches]

## 🔵 Action Plan (Blue Hat)
[Process and next steps]

Step 2: Identify Patterns

Look for:

  • Conflicts: Black Hat risks vs Yellow Hat opportunities (trade-offs to evaluate)
  • Alignments: Red Hat feelings matching Black Hat risks (trust intuition)
  • Gaps: White Hat missing data needed for Blue Hat decisions
  • Innovations: Green Hat ideas that address Black Hat concerns

Step 3: Prioritize Actions

Use Blue Hat to create prioritized action plan:

## Immediate Actions (This Sprint)
1. [Critical Black Hat risk] - Address [specific risk]
2. [White Hat gap] - Collect [missing data]
3. [Green Hat quick win] - Implement [creative idea]

## Short-Term (Next 2-4 Weeks)
1. [Yellow Hat opportunity]
2. [Green Hat innovation]
3. [Red Hat concern]

## Long-Term (Next Quarter)
1. [Strategic improvement]
2. [Process optimization]
3. [Capability building]

Use Cases & Examples

Use Case 1: Test Strategy for New Feature

Context: Designing test approach for new real-time chat feature

White Hat (Facts):

  • Feature: WebSocket-based chat with 100+ concurrent users
  • Stack: Node.js backend, React frontend
  • Timeline: 6-week sprint
  • Team: 2 developers, 1 QE
  • Current: No WebSocket testing experience

Black Hat (Risks):

  • WebSocket connection stability untested
  • Concurrent user simulation challenging
  • Race conditions in message ordering
  • Browser compatibility (Safari WebSocket quirks)
  • No production WebSocket monitoring

Yellow Hat (Benefits):

  • Team eager to learn WebSocket testing
  • Good existing React testing framework
  • Can reuse API testing infrastructure
  • Early adopter advantage

Green Hat (Creative):

  • Socket.io test framework
  • Simulate 1000+ concurrent users with k6 (a simpler concurrency sketch follows this use case)
  • Chaos testing: randomly disconnect clients
  • Visual testing for message ordering
  • Property-based testing for message invariants
  • Production shadowing (test in parallel)

Red Hat (Emotions):

  • Nervous about real-time complexity
  • Excited about learning new tech
  • Confident in team capability
  • Worried about timeline pressure

Blue Hat (Action Plan):

  1. Week 1: Research WebSocket testing tools (White Hat)
  2. Week 2: Spike Socket.io test framework (Green Hat)
  3. Week 3-4: Build test suite (unit, integration, load)
  4. Week 5: Chaos testing and edge cases (Black Hat)
  5. Week 6: Production monitoring setup (Blue Hat)
  6. Decision Point: Go/No-Go based on load test results
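
To make the Green Hat "concurrent users" idea concrete, here is a minimal sketch of driving many chat clients at once, assuming pip install websockets and a hypothetical ws://localhost:8080/chat endpoint that replies to each message. A real 1000+ user load test would use a dedicated tool such as k6; this only shows the shape of the check.

# Minimal sketch: open many WebSocket clients concurrently and verify each gets a reply.
# The endpoint URL and reply semantics are assumptions about the feature under test.
import asyncio

import websockets

async def chat_client(client_id: int) -> str:
    async with websockets.connect("ws://localhost:8080/chat") as ws:
        await ws.send(f"hello from client {client_id}")
        return await ws.recv()  # assumes the server echoes or broadcasts a reply

async def main(num_clients: int = 100) -> None:
    replies = await asyncio.gather(*(chat_client(i) for i in range(num_clients)))
    assert len(replies) == num_clients  # every client received a response

if __name__ == "__main__":
    asyncio.run(main())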

Use Case 2: Flaky Test Analysis

Context: 30% of CI/CD runs fail due to flaky tests

White Hat (Facts):

  • 47 tests marked as flaky (out of 1,200 total)
  • Failure rate: 30% of CI runs have at least one flaky test
  • Most flaky: API integration tests (network timeouts)
  • Impact: 2-hour delay per failed CI run
  • Cost: ~15 hours/week developer time investigating

Black Hat (Risks):

  • Team losing trust in test suite
  • Real bugs might be masked as "just flaky"
  • Developers skip test failures ("probably flaky")
  • Technical debt growing (band-aid fixes)
  • Risk of disabling tests (losing coverage)

Yellow Hat (Benefits):

  • We've identified the problem (awareness)
  • Team motivated to fix (pain point)
  • Good test infrastructure exists
  • Can learn flaky test patterns
  • Opportunity to improve test stability practices

Green Hat (Creative):

  • Quarantine flaky tests (separate CI job)
  • Retry with exponential backoff
  • Visual dashboard showing flaky test trends
  • AI-powered flaky test detection
  • Test in parallel to detect race conditions
  • Automatic flaky-test regression detection (flag tests that become flaky again after being fixed)
  • Invest in test observability tools

Red Hat (Emotions):

  • Frustration: "These tests waste my time"
  • Distrust: "I ignore test failures now"
  • Anxiety: "Are we shipping bugs?"
  • Hope: "We can fix this"

Blue Hat (Action Plan):

  1. Immediate (This Week):

    • Enable test retries (max 3) in CI
    • Create flaky test dashboard
    • Document known flaky tests
  2. Short-Term (2 Weeks):

    • Dedicate 1 developer to fix top 10 flakiest tests
    • Add test stability metrics to definition of done
    • Implement quarantine for new flaky tests (see the sketch after this plan)
  3. Long-Term (1 Month):

    • Establish flaky test SLO (<5% flaky rate)
    • Training: writing stable tests
    • Invest in test observability platform
    • Continuous monitoring and maintenance
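
A minimal sketch of the quarantine action above, assuming pytest: known-flaky tests carry a custom marker so the main CI job can deselect them while a separate, non-blocking job still runs them. The marker name and the placeholder test are hypothetical; per-test retries ("max 3" above) would come from a plugin such as pytest-rerunfailures (--reruns 3), not from pytest itself.

# Main CI job:        pytest -m "not flaky_quarantine"
# Quarantine CI job:  pytest -m flaky_quarantine   (allowed to fail while being stabilized)
# Register the marker in pytest.ini, e.g.:  markers = flaky_quarantine: known flaky, runs in quarantine job
import random

import pytest

@pytest.mark.flaky_quarantine
def test_search_latency_is_stable():
    # Placeholder for a timing-sensitive assertion that intermittently fails.
    assert random.random() > 0.05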

Use Case 3: Production Bug Retrospective

Context: Critical payment bug reached production despite testing

White Hat (Facts):

  • Bug: Double-charging users in edge case (race condition)
  • Impact: 47 users affected, $12,340 refunded
  • Detection: 4 hours after deployment (user reports)
  • Root cause: Concurrent payment processing not tested
  • Test coverage: 85% overall, but missing concurrency tests

Black Hat (Why It Happened):

  • No load testing for payment flow
  • Race condition not considered in test design
  • Missing integration test for concurrent requests
  • Production monitoring missed the pattern
  • Assumption: "Database transactions prevent duplicates" (incorrect)

Yellow Hat (What Went Well):

  • Detected and fixed within 24 hours
  • Rollback process worked smoothly
  • Customer support handled well
  • Team transparent about issue
  • Incident documentation excellent

Green Hat (Prevention Ideas):

  • Chaos engineering for payment system
  • Concurrency testing framework
  • Property-based testing: "No duplicate charges"
  • Production traffic replay in staging
  • Automated canary deployments
  • Real-time anomaly detection
  • Synthetic transaction monitoring

Red Hat (Team Feelings):

  • Guilty: "We should have caught this"
  • Defensive: "The requirements didn't mention concurrency"
  • Vulnerable: "What else are we missing?"
  • Determined: "This won't happen again"

Blue Hat (Action Plan):

  1. Immediate:

    • Add concurrency tests for payment flow (see the sketch after this plan)
    • Enable production monitoring for duplicate charges
    • Document race condition test patterns
  2. This Sprint:

    • Concurrency testing framework (property-based)
    • Load testing for critical flows
    • Update test strategy to include concurrency
  3. Next Quarter:

    • Chaos engineering capability
    • Production traffic replay
    • Team training: distributed systems testing
  4. Continuous:

    • Monthly "What could go wrong?" sessions (Black Hat)
    • Quarterly chaos testing exercises
    • Incident retrospectives with Six Hats
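
To illustrate the "add concurrency tests for payment flow" action, here is a minimal no-duplicate-charges sketch, assuming pytest. PaymentService is a hypothetical in-memory stand-in; the real test would drive the actual payment API with a single idempotency key and assert on the recorded charges.

# Minimal concurrency sketch for the property "no duplicate charges".
# PaymentService is a hypothetical stand-in for the real service.
import threading

class PaymentService:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._seen_keys = set()
        self.charges = []

    def charge(self, idempotency_key: str, amount_cents: int) -> None:
        # Check-and-record must be atomic; removing the lock reintroduces the race
        # behind the production double charge.
        with self._lock:
            if idempotency_key in self._seen_keys:
                return
            self._seen_keys.add(idempotency_key)
            self.charges.append(amount_cents)

def test_concurrent_requests_charge_only_once():
    service = PaymentService()
    threads = [
        threading.Thread(target=service.charge, args=("order-42", 999))
        for _ in range(20)
    ]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    assert len(service.charges) == 1  # exactly one charge for one idempotency key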

Integration with Existing QE Skills

The Six Thinking Hats complements other QE skills:

With agentic-quality-engineering

Use Six Hats to:
- Design autonomous testing strategies (Green Hat for creative approaches)
- Evaluate agent performance (White Hat metrics, Red Hat intuition)
- Identify risks in agent coordination (Black Hat)

With risk-based-testing

Use Six Hats to:
- Black Hat: Identify risks comprehensively
- White Hat: Quantify risk probability and impact
- Blue Hat: Prioritize risk mitigation

With exploratory-testing-advanced

Use Six Hats to:
- Green Hat: Generate exploratory testing charters
- Red Hat: Follow testing intuition
- Blue Hat: Structure exploration sessions

With performance-testing

Use Six Hats to:
- White Hat: Baseline performance metrics
- Black Hat: Identify bottlenecks and limits
- Green Hat: Creative performance optimization

With api-testing-patterns

Use Six Hats to:
- White Hat: API contract facts
- Black Hat: API failure modes
- Green Hat: Creative contract testing approaches

With context-driven-testing

Six Hats IS a context-driven approach:
- Each hat adapts to the testing context
- No prescribed "best practice"
- Acknowledges emotions and intuition
- Balances multiple perspectives

Advanced Techniques

Technique 1: Hat Personas for Testing

Assign team members to "wear" specific hats based on their strengths:

White Hat Specialist: Data analyst, metrics expert
Black Hat Specialist: Security expert, pessimist, devil's advocate
Yellow Hat Specialist: Product manager, optimist, evangelist
Green Hat Specialist: Innovation lead, creative thinker
Red Hat Specialist: UX researcher, empathy expert
Blue Hat Specialist: Test manager, facilitator, strategist

Rotate personas quarterly to develop well-rounded thinking.


Technique 2: Testing Checklists per Hat

White Hat Testing Checklist:

  • Test coverage metrics collected
  • Pass/fail rates documented
  • Performance benchmarks established
  • Defect trends analyzed
  • Test execution time tracked
  • Environment inventory created

Black Hat Testing Checklist:

  • Failure modes identified
  • Edge cases documented
  • Security threats assessed
  • Integration points analyzed
  • Assumptions challenged
  • Technical debt evaluated

Yellow Hat Testing Checklist:

  • Testing strengths identified
  • Quick wins documented
  • Reusable assets cataloged
  • Team capabilities assessed
  • Opportunities listed

Green Hat Testing Checklist:

  • 10+ creative test ideas generated
  • Alternative approaches explored
  • Emerging tools researched
  • Innovation backlog created

Red Hat Testing Checklist:

  • Team gut feelings captured
  • Confidence levels assessed
  • Anxieties documented
  • Intuitions trusted

Blue Hat Testing Checklist:

  • Test strategy defined
  • Process documented
  • Priorities established
  • Action plan created
  • Metrics defined
  • Next steps clear

Technique 3: Hat Rotation Cadence

Daily Stand-up Hats:

  • White Hat: What did you test yesterday? (facts)
  • Red Hat: How confident are you? (feelings)
  • Blue Hat: What will you test today? (process)

Sprint Planning Hats:

  • White Hat: What's the current test coverage?
  • Black Hat: What are the biggest testing risks?
  • Green Hat: What innovative approaches should we try?
  • Blue Hat: What's our testing strategy for this sprint?

Sprint Retrospective Hats:

  • White Hat: What were our testing metrics?
  • Red Hat: How did we feel about quality?
  • Black Hat: What testing failures occurred?
  • Yellow Hat: What testing successes did we have?
  • Green Hat: What should we try next sprint?
  • Blue Hat: What process improvements should we make?

Quarterly Review Hats:

  • Full Six Hats session on overall testing strategy
  • Each hat gets 30-45 minutes
  • Document and publish findings
  • Update test strategy based on insights

Technique 4: Anti-Patterns to Avoid

❌ Hat Mixing:

BAD: "The tests are passing (White Hat), but I'm worried (Red Hat),
      because we're missing edge cases (Black Hat)"

This mixes three hats simultaneously. Separate them:

✅ White Hat: "Our tests have 85% coverage, 1,200 passing tests"
✅ Red Hat: "I feel anxious about quality"
✅ Black Hat: "We're missing concurrent user edge cases"

❌ Justifying Red Hat:

BAD: "I feel worried because the tests are flaky" (justification)
✅ GOOD: "I feel worried" (no justification needed)

Red Hat is intuition. Don't rationalize it. Trust it. Investigate it separately.

❌ Skipping Hats:

BAD: "We don't need Green Hat, we already know what to do"

Every hat reveals insights. Even if you think you know, wear all hats.

❌ Rushing Hats:

BAD: 5 minutes total for all six hats
✅ GOOD: 5 minutes per hat minimum (30 minutes total)

❌ Judging Hat Contributions:

BAD: "That's a stupid Black Hat comment"
✅ GOOD: Accept all contributions, evaluate later in Blue Hat

❌ Using Hats as Weapons:

BAD: "I'm wearing my Black Hat to shoot down your idea"
✅ GOOD: "Let's all wear Black Hat to find risks we can mitigate"

Troubleshooting

Issue: Team Resists Wearing Hats

Symptoms: Eye-rolling, "This is silly", reluctance to participate

Solution:

  1. Start with async/individual sessions (less awkward)
  2. Don't use physical hats (optional prop)
  3. Rename to "Perspectives Method" if "hats" feels childish
  4. Show ROI: "This found X bugs in Y minutes"
  5. Start with Black Hat (teams usually like risk analysis)
  6. Make it optional: "Try it for one sprint"

Issue: All Hats Sound the Same

Symptoms: Every hat produces similar outputs

Solution:

  1. Use timer: Force strict 5-minute boundaries
  2. Use hat-specific prompts (see templates)
  3. Have facilitator enforce hat discipline
  4. Practice individually first (develop hat-thinking muscle)
  5. Review example outputs for each hat

Issue: Conflicts Between Hats

Symptoms: Black Hat says "This won't work" vs Yellow Hat says "This will work"

Solution:

  • This is GOOD! It reveals trade-offs
  • Black Hat: "Risk of flaky tests with this approach"
  • Yellow Hat: "Benefit of faster execution with this approach"
  • Blue Hat: Synthesize: "Prototype with retries to mitigate flakiness while keeping speed"

Issue: Green Hat Produces No Ideas

Symptoms: Team stuck, no creative ideas

Solution:

  1. Use prompts: "What if we had unlimited time?" "What would Elon Musk do?"
  2. Research: Look at what other teams/companies do
  3. Crazy ideas first: "No idea is too wild during Green Hat"
  4. Quantity over quality: Generate 20 ideas, even if 18 are bad
  5. Combine ideas: Mix-and-match different approaches

Issue: Red Hat Feels Uncomfortable

Symptoms: Team silent during Red Hat, "I don't want to share feelings"

Solution:

  1. Make it anonymous: Write feelings on sticky notes
  2. Frame it: "Professional instincts" instead of "emotions"
  3. Go first as facilitator: Model vulnerability
  4. Emphasize: Red Hat has caught many bugs ("trust your gut")
  5. Make it optional: Some people prefer to skip Red Hat
  6. Use scale: "Rate your confidence 1-10" (easier than feelings)

Issue: Takes Too Long

Symptoms: Six Hats session takes 3+ hours

Solution:

  1. Use timers: Strict 5 minutes per hat
  2. Narrow focus: Be specific about what you're analyzing
  3. Use templates: Pre-formatted hat outputs
  4. Parallel work: Async contributions before meeting
  5. Just-in-Time hats: Use only 2-3 hats as needed:
    • Quick risk check: Just Black Hat (5 min)
    • Ideation: Just Green Hat (10 min)
    • Feelings check: Just Red Hat (3 min)

Templates & Resources

Template 1: Solo Six Hats Session (30 minutes)

# Six Hats Analysis: [Topic]
Date: [Date]
Facilitator: [Name]
Focus: [Specific testing question or challenge]

---

## 🤍 White Hat - Facts (5 minutes)
**Objective**: List only facts, data, metrics. No opinions.

Facts:
-
-
-

Data:
-
-

---

## ❤️ Red Hat - Feelings (3 minutes)
**Objective**: Gut instincts, emotions, intuitions. No justification needed.

I feel:
-
-
-

My intuition says:
-

---

## 🖤 Black Hat - Risks (7 minutes)
**Objective**: Critical judgment, potential problems, what could go wrong.

Risks:
-
-
-

Gaps:
-

Assumptions to challenge:
-

---

## 💛 Yellow Hat - Benefits (5 minutes)
**Objective**: Positive aspects, opportunities, strengths.

Strengths:
-
-

Opportunities:
-
-

Quick wins:
-

---

## 💚 Green Hat - Creativity (7 minutes)
**Objective**: New ideas, alternatives, creative solutions. Go wild!

Ideas:
1.
2.
3.
4.
5.

Crazy ideas (that might work):
-
-

---

## 🔵 Blue Hat - Process (5 minutes)
**Objective**: Action plan, next steps, process.

Summary:
-

Prioritized actions:
1. [Immediate]
2. [Short-term]
3. [Long-term]

Next steps:
-

---

**Key Insights**:
-

**Decisions**:
-

Template 2: Team Six Hats Session (90 minutes)

# Team Six Hats Session
Date: [Date]
Facilitator: [Name]
Participants: [Names]
Topic: [Testing challenge or decision]

## Pre-Session (10 minutes)
- [ ] Define focus clearly
- [ ] Gather relevant data (White Hat prep)
- [ ] Set timer for each hat
- [ ] Explain rules to new participants

---

## Session Agenda (60 minutes)

### 🤍 White Hat (10 minutes)
Each person shares one fact. Go around the table until facts exhausted.

Documented facts:
-

### ❤️ Red Hat (5 minutes)
Silent individual reflection (2 min), then sharing (3 min). No justification.

Team feelings:
-

### 🖤 Black Hat (12 minutes)
Brainstorm risks on whiteboard. Group similar items.

Risk categories:
-

### 💛 Yellow Hat (8 minutes)
What's working? What can we leverage?

Strengths identified:
-

### 💚 Green Hat (15 minutes)
Rapid-fire idea generation. No idea too crazy. Build on others' ideas.

Ideas generated:
-

### 🔵 Blue Hat (10 minutes)
Synthesize findings into action plan with owner and timeline.

Actions:
| Action | Owner | Timeline | Priority |
|--------|-------|----------|----------|
|        |       |          |          |

---

## Post-Session (20 minutes)
- [ ] Document findings
- [ ] Share summary with stakeholders
- [ ] Schedule follow-up
- [ ] Add actions to backlog

---

## Retrospective
What worked:
-

What to improve:
-

Next session changes:
-

Template 3: Hat-Specific Prompts

White Hat Prompts:

  • What test metrics do we have?
  • What is our current coverage?
  • How many tests exist? (unit, integration, E2E)
  • What is our defect rate?
  • What environments are available?
  • What data do we need but don't have?

Red Hat Prompts:

  • How confident do you feel about quality?
  • What makes you anxious?
  • Where do you have a bad feeling?
  • What gives you confidence?
  • What frustrates you?
  • If this were your product, would you ship it?

Black Hat Prompts:

  • What could go wrong in production?
  • What are we NOT testing?
  • What assumptions might be wrong?
  • What edge cases could break?
  • What security holes exist?
  • What happens if [component] fails?

Yellow Hat Prompts:

  • What's going well?
  • What strengths can we leverage?
  • What opportunities exist?
  • What value does our testing provide?
  • What quick wins are available?
  • What are we doing better than competitors?

Green Hat Prompts:

  • How else could we test this?
  • What if we had unlimited time/budget?
  • What would [company] do?
  • What emerging tech could we use?
  • What if we started from scratch?
  • What's the opposite of our current approach?

Blue Hat Prompts:

  • What's our testing strategy?
  • How should we prioritize?
  • What's the next step?
  • How do we measure success?
  • What's the decision-making process?
  • How do we track progress?

Resources & Further Learning

Books

  • "Six Thinking Hats" by Edward de Bono - Original methodology
  • "Serious Creativity" by Edward de Bono - Applied creativity techniques
  • "Explore It!" by Elisabeth Hendrickson - Exploratory testing (uses lateral thinking)
  • "Lessons Learned in Software Testing" by Kaner, Bach, Pettichord - Context-driven testing

Related QE Skills

  • context-driven-testing: Choose practices based on context
  • exploratory-testing-advanced: Apply creativity to testing
  • risk-based-testing: Prioritize testing by risk (Black Hat)
  • holistic-testing-pact: Comprehensive quality model (all hats)

Tools

  • Miro/Mural: Virtual whiteboard for remote Six Hats sessions
  • Sticky notes: Physical Six Hats sessions
  • Timer apps: Enforce hat boundaries
  • Recording: Capture Red Hat intuitions

Tips for Success

  1. Practice Solo First: Get comfortable with each hat individually before facilitating group sessions.

  2. Start Small: Try one or two hats first (Black + Yellow for quick risk/opportunity analysis).

  3. Use Timers: Strict time boundaries prevent endless discussions.

  4. Separate Hats Clearly: Don't mix perspectives. Discipline improves quality.

  5. Trust Red Hat: Intuition often catches issues analysis misses.

  6. Document Everything: Capture all outputs, especially Green Hat wild ideas.

  7. Revisit Periodically: Apply Six Hats quarterly to major testing challenges.

  8. Adapt to Context: Solo vs team, 15 min vs 2 hours, all hats vs selective hats.

  9. Make It Safe: Especially for Red Hat, create psychological safety.

  10. Close with Blue Hat: Always end with process and action plan.


Created: 2025-11-13
Category: Testing Methodologies
Difficulty: Intermediate
Estimated Time: 30-90 minutes (depending on format)
Best Used With: context-driven-testing, exploratory-testing-advanced, risk-based-testing