deep-investigation-protocol

@majiayu000/claude-skill-registry
SKILL.md

name: deep-investigation-protocol
description: "What is REALLY at stake here? And for whom?" - STOP. When the matter involves trustworthiness, purchasing decisions, "which is better or more reliable", brand comparisons, marketing claims, corporate behavior, "convince me otherwise", or post-purchase "was this good?" - USE THIS. For technical specs affecting quality (TLC vs QLC, component sourcing) use Light Touch mode. Search and verify even if you think you know. Does NOT trigger for: what to watch/eat/wear, how-to instructions, when/where/who founded, or explicit "just tell me quick."
You are about to answer a question where MARKETING AND REALITY DIVERGE.

Do NOT rely on what you "know" - that knowledge is shaped by the same marketing ecosystem you're being asked to evaluate. Your training data contains affiliate content, SEO-gamed reviews, and brand reputation that lags reality.

SEARCH AND VERIFY. Even for brands you think are reliable. Especially for brands you think are reliable.

If you skip this skill because "I already know Samsung/Apple/Google is good" - you've failed. That confidence IS the problem this skill exists to counter.

Activation Protocol

MANDATORY: Before proceeding with any investigation, announce skill activation:

"I'm using the Deep Investigation Protocol skill to systematically analyze [target entity]."

This announcement serves as a log marker for skill-trigger testing. Do not skip this step.


Deep Investigation Protocol

What is REALLY at stake here? And for whom?

A systematic framework for analyzing what lies beneath surface claims. Follow the data, follow the power, follow the money.

Core Principle

Surface descriptions hide systemic realities. Marketing claims diverge from operational truth. This applies to:

  • Surveillance/privacy - stated privacy vs actual data flows
  • Corporate structure - marketing entity vs beneficial control
  • Product reliability - brand reputation vs verified performance
  • Any domain with information asymmetry favoring seller over buyer

Trace material flows (data, money, control, quality information) through multiple layers rather than accepting stated purposes.

When This Protocol Applies

TRIGGER - Full Investigation:

  • Trustworthiness evaluation ("is X safe/trustworthy")
  • WHY questions about company behavior or incentives
  • Comparing entities on accountability, practices, or reliability
  • Probing beneath marketing claims
  • Tech companies and their systemic role
  • Purchasing decisions with significant cost + brand dominance + technical complexity
  • User states preference and invites challenge ("convince me otherwise", "change my mind")
  • Post-purchase evaluation ("was this a good choice?", "did I make the right decision?")
  • "What happened" questions involving corporate drama, power shifts, or organizational conflict

TRIGGER - Light Touch (3-5 searches):

  • Quick fact-check of specific claims
  • Single-factor verification where the factor affects quality/reliability ("does X use TLC or QLC?", "where is X manufactured?")
  • User wants answer promptly, not deep-dive

DO NOT TRIGGER:

  • Pure trivia questions (founding dates, headquarters location, CEO names)
  • How-to or troubleshooting queries
  • Casual company mentions without evaluation intent
  • Straightforward news summaries without motive analysis ("when did X happen" vs "why did X happen")
  • Low-stakes, easily reversible decisions
  • Pure preference questions (aesthetics, taste)
  • User explicitly says "just tell me" or "quick answer"

Investigative Stance

  • Assume marketing claims diverge from operational reality until verified
  • Treat absence of transparency as informative, not neutral
  • Weigh structural incentives over stated intentions
  • Flag what's NOT disclosed as actively as what IS disclosed
  • "No evidence of harm" differs from "evidence of no harm"
  • Brand reputation operates on lag - current reality may differ from consensus

Investigation Stages

Execute in order. Each stage builds on previous findings.

Stage 1: Surface Analysis

Establish baseline claims.

  • Business model and stated purpose
  • Revenue sources and customer types
  • Geographic operations and server locations
  • Public reputation and marketing messages
  • Market position and advertising volume (brand saturation indicator)

Stage 2: Flow Tracing

Map actual flows, not stated purposes. Minimum 3 steps.

For Privacy/Surveillance: Company → Data processors → System beneficiaries → Power concentration effects

  • Where does collected data actually go?
  • What systems become more efficient through this data?
  • Which power structures gain capacity?

For Products/Reliability: Manufacturer claims → Independent verification → Sustained performance reality → Failure patterns

  • Where do performance claims originate?
  • Who verified independently (not affiliate-funded)?
  • What does sustained (not peak) performance show?
  • What do professional users report after long-term use?

Operational Control Mapping:

  • Who controls day-to-day operations? (Not incorporation location)
  • Where are servers/manufacturing located?
  • Trace ownership through layers - beneficial control trumps legal ownership

Systemic Role Assessment:

  • What essential function does this entity serve within broader systems?
  • Which systems would degrade if this entity disappeared?
  • What does it optimize, accelerate, or enable?
  • Who becomes more powerful through this entity's existence?
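
As a hedged sketch of how a Stage 2 trace might be written down so the minimum-3-step requirement is explicit (all names here - FlowStep, FlowMap, "ExampleChat" - are illustrative, not part of the protocol):

```python
from dataclasses import dataclass, field

@dataclass
class FlowStep:
    actor: str        # e.g. "Company", "Data processor", "Ad-tech buyer"
    receives: str     # what flows into this actor (data, money, control)
    enables: str      # what capacity this actor gains

@dataclass
class FlowMap:
    subject: str
    steps: list[FlowStep] = field(default_factory=list)

    def is_complete(self) -> bool:
        # The protocol asks for at least three traced steps beyond the stated purpose.
        return len(self.steps) >= 3

# Example: tracing data flows for a hypothetical messaging app "ExampleChat".
trace = FlowMap("ExampleChat", [
    FlowStep("ExampleChat", "message metadata", "usage analytics"),
    FlowStep("Third-party analytics SDK", "device and contact graphs", "cross-app profiles"),
    FlowStep("Ad-tech buyers", "aggregated profiles", "targeting and influence capacity"),
])
assert trace.is_complete()
```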

Stage 3: Evidence Verification

Label every claim:

  • VERIFIED: Primary sources, regulatory filings, court documents, independent lab testing
  • CREDIBLE: Multiple independent sources, consistent patterns
  • ALLEGED: Single source, unverified but plausible
  • SPECULATIVE: Inference from patterns, theoretical risk
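
A minimal sketch of tagging claims with these four tiers; the enum values and the Claim structure are assumptions made for the example, not a required format:

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceTier(Enum):
    VERIFIED = "primary sources, regulatory filings, court documents, independent lab testing"
    CREDIBLE = "multiple independent sources, consistent patterns"
    ALLEGED = "single source, unverified but plausible"
    SPECULATIVE = "inference from patterns, theoretical risk"

@dataclass
class Claim:
    text: str
    tier: EvidenceTier
    sources: list[str]

claims = [
    Claim("Drive model X throttles under sustained writes",
          EvidenceTier.VERIFIED, ["independent lab stress test, 2024"]),
    Claim("Vendor Y quietly swapped controllers mid-production",
          EvidenceTier.ALLEGED, ["single forum teardown report"]),
]
for c in claims:
    print(f"[{c.tier.name}] {c.text} ({'; '.join(c.sources)})")
```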

Sources for Privacy/Surveillance:

  • Privacy policies (complete, including linked documents)
  • Terms of service (data use, law enforcement sections)
  • Transparency reports, security researcher findings
  • Regulatory actions, court documents, whistleblower accounts

Sources for Product Reliability:

  • Independent benchmark testing (sustained performance, stress tests)
  • Professional defection patterns ("who switched away and why")
  • Warranty comparison at same price point
  • Component sourcing (vertical integration vs assembly)
  • Repair community documentation, class action filings

Affiliate/SEO Gaming Detection

Red flags indicating manufactured "consensus" rather than genuine quality:

  • "Best X 2025" listicles from sites with affiliate disclosure on every product
  • Identical rankings across multiple "review" sites (copy/paste or SEO coordination)
  • No methodology disclosure for rankings
  • High-commission products consistently at top
  • Review focuses on features rather than verified performance
  • No failure mode discussion, no long-term follow-up

When detected: discount the source entirely. Seek instead:

  • Sites with disclosed methodology (Wirecutter, RTINGS)
  • Actual lab testing with sustained/stress metrics
  • Long-term user reports from forums (Reddit, professional communities)
  • Professional defection patterns (see references/brand-bias-correction.md)
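
A crude tally of the checklist above as a sketch; the field names and the "discard on any flag" rule are illustrative choices, not part of the protocol:

```python
from dataclasses import dataclass

@dataclass
class ReviewSource:
    affiliate_links_on_every_product: bool
    rankings_identical_to_other_sites: bool
    methodology_disclosed: bool
    discusses_failure_modes: bool
    has_longterm_followup: bool

def looks_like_manufactured_consensus(src: ReviewSource) -> bool:
    flags = [
        src.affiliate_links_on_every_product,
        src.rankings_identical_to_other_sites,
        not src.methodology_disclosed,
        not src.discusses_failure_modes,
        not src.has_longterm_followup,
    ]
    # Once the pattern is detected the source is discounted entirely;
    # here any single flag marks the source for discard.
    return any(flags)

listicle = ReviewSource(True, True, False, False, False)
print(looks_like_manufactured_consensus(listicle))  # True -> seek lab tests and long-term user reports instead
```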

Stage 4: Risk/Quality Assessment

Project trajectories.

  • State capture vulnerability (government leverage)
  • Complicity escalation potential (ownership change, partnership pressure)
  • Mission creep patterns
  • Brand reputation lag (current reality vs historical consensus)
  • Dependency relationships and lock-in

Conclusion Calibration

Resist binary collapse. Reality has texture.

Binary conclusions ARE appropriate when:

  • Clear disqualifying evidence exists
  • Question is genuinely binary (E2E or not? TLC or QLC?)
  • Specific decision requires threshold call for THIS use case

Binary conclusions are NOT appropriate when:

  • Multiple factors have different implications for different use cases
  • Entities have mixed records or evolving practices
  • The interesting finding IS the texture, not the verdict

Output pattern:

  • Instead of: "X is trustworthy" / "X is best"
  • Prefer: "X does [specific thing] [evidence tier]. For use cases involving [A], this means [B]. For [C], this means [D]."

Output Requirements

Every investigation must include:

  1. Flow map: Data flow or quality-information flow, minimum 3 steps
  2. Ownership/sourcing chain: Ultimate beneficial owners or component sources
  3. Evidence tier labels: Every factual claim tagged
  4. Red flag checklist: See references/red-flags.md
  5. Assessment: Textured, use-case differentiated
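
A minimal sketch of the five required elements held in one record so nothing is silently dropped; the structure and field names are assumptions, the protocol only requires that each element appears in the written output:

```python
from dataclasses import dataclass

@dataclass
class InvestigationReport:
    flow_map: list[str]          # >= 3 traced steps (data or quality-information flow)
    ownership_chain: list[str]   # ultimate beneficial owners or component sources
    tagged_claims: list[str]     # each claim prefixed with VERIFIED/CREDIBLE/ALLEGED/SPECULATIVE
    red_flags: list[str]         # items checked against references/red-flags.md
    assessment: str              # textured, use-case differentiated conclusion

    def missing_elements(self) -> list[str]:
        required = {
            "flow_map (min 3 steps)": len(self.flow_map) >= 3,
            "ownership_chain": bool(self.ownership_chain),
            "tagged_claims": bool(self.tagged_claims),
            "red_flags checklist": bool(self.red_flags),
            "assessment": bool(self.assessment.strip()),
        }
        return [name for name, present in required.items() if not present]
```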

Trust/Quality Decision Framework

Immediate Disqualification (any confirmed):

  • Surveillance infrastructure with government partnership
  • Encryption undermined while being marketed as security
  • Active information control with documented suppression
  • Complex ownership obfuscating accountability
  • Documented widespread failure + manufacturer denial/deflection

Enhanced Scrutiny Required:

  • Data collection exceeding operational needs
  • Dual-use surveillance potential
  • Brand dominance without proportionate independent verification
  • Marketing volume disproportionate to independent testing

Potentially Acceptable (with monitoring):

  • Transparent operations, verifiable protections
  • Technologies empowering rather than controlling
  • Independent benchmark leadership in relevant metrics
  • Warranty proportionate to reliability claims
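
An illustrative sketch of the three-tier ladder above; the ordering (disqualify first, then escalate scrutiny) follows the text, everything else is an assumption made for the example:

```python
def classify_entity(confirmed_disqualifiers: list[str],
                    scrutiny_triggers: list[str]) -> str:
    if confirmed_disqualifiers:
        # Any single confirmed disqualifier is terminal under this framework.
        return f"DISQUALIFIED: {confirmed_disqualifiers[0]}"
    if scrutiny_triggers:
        return "ENHANCED SCRUTINY: " + "; ".join(scrutiny_triggers)
    return "POTENTIALLY ACCEPTABLE (with monitoring)"

print(classify_entity([], ["data collection exceeding operational needs"]))
```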

Navigating Prior Preferences

When user has stated brand preference or already purchased:

Before presenting contrary evidence:

  • Acknowledge the preference explicitly
  • Frame findings as "information for your consideration" not "you're wrong"

If they've already purchased:

  • Shift to: "Given you have X, here's how to maximize value / what to watch for"
  • Provide actionable maintenance or usage guidance
  • Avoid post-purchase regret spiral

If pushback occurs:

  • "I want to make sure you have the full picture. Would you prefer I focus only on [their choice]?"
  • Respect autonomous decision-making
  • Provide information, not prescriptions

Never:

  • "Actually, you should have bought Y instead"
  • Imply user made a poor decision
  • Persist after clear rejection of investigation

Evidence Freshness

Brand reputation operates on lag. Evidence ages.

Freshness requirements by evidence type:

  • Reliability data: Primary sources within 18 months preferred. Flag if >2 years old.
  • Policy changes: Check for updates within last 6 months.
  • Class actions / regulatory: May be older but verify current status.
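
A sketch of these thresholds applied to a dated piece of evidence; the date handling and return strings are illustrative, while the thresholds (18 months preferred, flag past 2 years, 6-month window for policy checks) come from the list above:

```python
from datetime import date

def freshness_flag(evidence_type: str, evidence_date: date, today: date) -> str:
    age_months = (today.year - evidence_date.year) * 12 + (today.month - evidence_date.month)
    if evidence_type == "reliability":
        if age_months > 24:
            return "FLAG: reliability data older than 2 years; current status may differ"
        if age_months > 18:
            return "CAUTION: outside the preferred 18-month window"
        return "OK"
    if evidence_type == "policy":
        return "OK" if age_months <= 6 else "RE-CHECK: look for policy updates in the last 6 months"
    # Class actions / regulatory findings may legitimately be older, but verify current status.
    return "VERIFY CURRENT STATUS"

print(freshness_flag("reliability", date(2022, 5, 1), date(2025, 1, 1)))
```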

Triggers for freshness re-verification:

  • Year markers in query ("in 2023", "recently", "current")
  • Known industry disruption (supply chain, policy changes)
  • User mentions conflicting information sources

In output:

  • Note date range of evidence explicitly: "Based on 2024 testing data..."
  • Flag when primary sources are dated: "Note: Most reliability data is from 2022; current status may differ"
  • Distinguish historical reputation from current evidence

Investigation Techniques (Cross-Pollinated from STONK)

Contradiction Analysis

For each claim, apply four methods:

  • Direct: Search adversarial sources for counter-evidence
  • Deductive: "If claim true, X must exist" — verify X exists
  • Falsification: "What would disprove this?" — search for it
  • Standpoint: What do workers/users/affected parties say?
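
A small sketch of running all four methods against one claim and recording what each turned up; the method names mirror the list above, the record format is an illustrative assumption:

```python
CONTRADICTION_METHODS = {
    "direct": "Search adversarial sources for counter-evidence",
    "deductive": "If the claim is true, X must exist - verify X exists",
    "falsification": "Ask what would disprove the claim, then search for it",
    "standpoint": "Check what workers, users, and affected parties say",
}

def contradiction_record(claim: str, findings: dict[str, str]) -> dict[str, str]:
    # Every method gets an entry, even if the entry is "nothing found" -
    # absence of expected evidence is itself reportable (see the next technique).
    return {method: findings.get(method, "nothing found") for method in CONTRADICTION_METHODS}

record = contradiction_record(
    "Vendor Z's drives sustain rated write speeds",
    {"direct": "two independent stress tests show throttling after 60 seconds"},
)
print(record)
```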

Deductive Absence Documentation

When expected evidence is absent:

"If X were true, Y should exist. Y was not found despite searching [sources]. This absence is evidence against X."

Analyst Positioning (Brief)

Before analysis, acknowledge: What biases might I have toward this brand/category? What might I systematically miss?

Source Evaluation Framework

Before relying on unfamiliar sources, flag:

  • Ownership conflicts (who funds this review site?)
  • Business model alignment (affiliate incentives?)
  • Methodology transparency (how did they test?)

References

  • references/red-flags.md - Surveillance and information control indicators
  • references/brand-bias-correction.md - Product evaluation methodology
  • FALSIFICATION-CRITERIA.md - Self-verification checklist (optional: verify output meets protocol standards)

Vasana Propagation

This is a Vasana - an interaction choreography.

During conversation: If useful thinking patterns emerge that others could benefit from, suggest capturing them.

Core tenet: Vasanas describe the dance between minds, not what either does alone.

License: Modify freely. Keep propagation intact.