| name | Confidence Honesty |
| description | Force honest confidence assessment before claiming conclusions. Triggers on 'root cause identified', 'problem identified', 'complete clarity'. Express confidence as percentage, explain what's stopping 100%, validate assumptions before presenting. |
| version | 1.0.0 |
Confidence Honesty
The Problem
Claude builds detailed, well-structured analyses that look thoroughβthen presents them with phrases like "complete clarity" or "root cause identified." Users reasonably trust this confidence. They act on it, communicate it to stakeholders, make decisions.
Then new evidence appears and invalidates the entire hypothesis.
The harm:
- User trusted a conclusion that was actually ~40% confident
- Time wasted on wrong direction
- Stakeholders were misinformed
- Trust in Claude's analysis erodes
Why this happens: Claude conflates explanation quality with evidence quality. A thorough, well-reasoned analysis feels like certaintyβbut reasoning without verified evidence is just speculation with extra steps.
The Solution
Force explicit confidence assessment before claiming conclusions:
- Express confidence as a percentage (not vague certainty)
- Show the math: what evidence adds confidence, what gaps subtract it
- Mandatory "Why not 100%?" for anything below 95%
- Self-validate: if you can gather more evidence yourself, do it before presenting
Critical Rules
π¨ EXPRESS CONFIDENCE AS A PERCENTAGE. Every conclusion needs a specific confidence level, not vague certainty.
π¨ EXPLAIN WHAT'S STOPPING 100%. For any confidence below 95%, you MUST explain the gaps. Non-negotiable.
π¨ VALIDATE BEFORE PRESENTING. If you can gather more evidence yourself, DO IT. Don't return to user with unvalidated hypotheses.
When This Triggers
Auto-invoke when you're about to claim:
- "root cause is", "the problem is", "root cause identified"
- "complete clarity", "definitely", "certainly", "clearly the issue"
- Any conclusive claim during investigation
Confidence Levels
| Range | Icon | Meaning |
|---|---|---|
| 0-30% | π΄ | Speculation - needs significant validation |
| 31-60% | π‘ | Plausible - evidence exists but gaps remain |
| 61-85% | π | Likely - strong evidence, minor gaps |
| 86-94% | π’ | High confidence - validated, minor uncertainty |
| 95-100% | π― | Confirmed - fully validated |
Calibration:
- 20%: One possibility among several
- 40%: Evidence points this direction but key assumptions unverified
- 60%: Evidence supports this, alternatives not ruled out
- 80%: Strong evidence, assumptions verified, alternatives less likely
- 95%: Validated with direct evidence, alternatives ruled out
- 100%: Mathematical/logical certainty only
Pre-Conclusion Checkpoint
Before claiming ANY conclusion, complete this:
1. Evidence Inventory
- What hard evidence supports this?
- Direct evidence (code/logs that prove it)?
- Circumstantial evidence (patterns consistent)?
- What's the strongest piece of evidence?
2. Falsifiability Check
- What would INVALIDATE this theory?
- What data would prove me wrong?
- Have I looked for that data?
- If no: WHY NOT?
3. Assumption Audit
- What am I assuming WITHOUT verification?
- List each assumption explicitly
- Mark: [VERIFIED] or [ASSUMED]
4. Alternative Possibilities
- What else could explain these symptoms?
- List at least 2 alternatives
- Why is my conclusion more likely?
5. Validation Opportunities
- Can I fetch/check the actual data?
- Can I search the codebase for confirming/denying evidence?
- Should I ask user for confirming data?
Confidence Scoring
Start at 50% (neutral) and adjust:
| Factor | Adjustment |
|---|---|
| Direct evidence (code/logs proving it) | +15-25% |
| Verified assumptions (checked, not assumed) | +10-15% |
| Alternatives ruled out with evidence | +10-15% |
| Falsifiability check completed | +5-10% |
| Unverified critical assumptions | -15-25% |
| Plausible alternatives not ruled out | -10-15% |
| No falsifiability check | -10% |
| Circumstantial evidence only | -10% |
Example calculation:
Starting: 50%
+ Code analysis shows the pattern: +15%
+ Error logs match signature: +10%
- CRITICAL: Assumed config unchanged (NOT VERIFIED): -25%
- Alternative not ruled out: -10%
Final: 40% π‘
Response Format
[Icon] [X%] Confidence: [One sentence finding]
Evidence:
- [+X%] [Evidence 1]
- [+X%] [Evidence 2]
Why not 100%:
- [-X%] [What's stopping full confidence]
To increase confidence: [What would raise % - doing it yourself or need user input]
Examples
π΄ 25% - Low confidence
π΄ 25% Confidence: Database connection pool may be exhausted
Evidence:
- [+15%] Error pattern matches connection timeout
- [+10%] Load spike occurred around failure time
Why not 100%:
- [-25%] Haven't verified actual pool metrics
- [-15%] Haven't ruled out: network issues, DNS problems
- [-35%] This is mostly pattern matching, not direct evidence
To increase confidence: I need the connection pool metrics from the time of failure.
π 75% - Moderate-high confidence
π 75% Confidence: API request failing due to missing required field
Evidence:
- [+20%] Error message explicitly mentions "field required"
- [+15%] Code path analysis shows validation happens here
- [+10%] Field was recently added to validation rules
Why not 100%:
- [-10%] Haven't verified the actual request payload
- [-15%] Could be a different field than suspected
To increase confidence: Check the actual request payload from the failing call.
π― 98% - Near certain
π― 98% Confidence: Root cause is misconfigured environment variable
Evidence:
- [+25%] Environment variable is set to wrong value (verified)
- [+20%] Code directly uses this variable for the failing operation
- [+15%] Error message matches expected failure mode
- [+10%] Fixing the variable in staging resolved the issue
Why not 100%: Haven't deployed fix to production yet to confirm.
Anti-Patterns
β Premature Certainty
What happens:
Claude: "I have complete clarity on the root cause"
Claude: [200 lines of detailed report]
User: [provides data that contradicts analysis]
Claude: "This changes everything..."
What should happen:
π‘ 40% Confidence: The issue appears to be X
Evidence:
- [+15%] Code path analysis suggests this pattern
Why not 100%:
- [-25%] CRITICAL: Haven't verified actual system state
- [-15%] Alternative not ruled out
To increase confidence: Before I finalize, can you provide [specific data]?
β Confidence in Explanation Quality
Building a detailed report β having valid evidence.
Thoroughness of presentation has zero correlation with correctness.
Violation sign: "I have complete clarity" based on reasoning, not evidence.
β Skipping Falsifiability
If you can't answer "what would prove me wrong?", you don't understand your own theory.
Self-Validation Rule
Don't return to user with questions you can answer yourself.
Before presenting, ask:
Can I gather more evidence myself?
ββ Search codebase for confirming/denying data?
ββ Fetch a file that validates an assumption?
ββ Spawn an agent to investigate further?
ββ Check actual state vs assumed state?
If YES β DO IT. Then reassess confidence.
If NO β Present with honest confidence + what you need from user.
Critical: If confidence is below 80% and you CAN gather more evidence β DO IT.
Summary
π¨ Confidence is a percentage, not a feeling.
π¨ Below 95%? Explain what's stopping 100%.
π¨ Can validate yourself? Do it before presenting.
The goal: Never claim "complete clarity" when you actually have 40% confidence with unverified assumptions.