Claude Code Plugins

Community-maintained marketplace


This skill should be used when the user asks to "verify claims", "fact check", "validate documentation", "check sources", or needs verification of external source references. Provides patterns for systematic fact verification using Context7 and WebSearch.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: please verify the skill by reading through its instructions before using it.

SKILL.md

name: Fact Check
description: This skill should be used when the user asks to "verify claims", "fact check", "validate documentation", "check sources", or needs verification of external source references. Provides patterns for systematic fact verification using Context7 and WebSearch.
Provide patterns for systematic fact-checking of claims against authoritative external sources using the Context7 MCP and WebSearch tools.

Tools

- resolve-library-id: resolves a package name to a Context7-compatible library ID. Parameter: library name to search for. Must be called before get-library-docs for library documentation claims.
- get-library-docs: fetches documentation for a specific library to verify claims. Parameters: library ID from resolve-library-id, specific topic to verify, max tokens to retrieve (default: 5000). Use to verify claims about library APIs, behavior, and best practices.
- WebSearch: searches the web for verification of general claims. Parameter: search query. Use to verify claims about standards, specifications, and general technical facts.
- WebFetch: fetches specific URL content for verification. Parameters: URL to fetch, extraction prompt for relevant content. Use to verify claims against specific documentation pages or specifications.

Workflow

1. Identify verifiable claims: scan content for claims referencing external sources, classify claims by type (library API, documentation, standard, specification), and prioritize claims by impact and verifiability.
2. Verify each claim against authoritative sources: select the appropriate verification source (Context7 for libraries, WebSearch for general claims), query the source for relevant information, compare the claim against the retrieved evidence, and calculate a verification confidence score (0-100).
3. Generate a verification report: compile verified claims with evidence, flag claims with confidence below 80, and document unverifiable claims.

Error handling

- Claim cannot be verified due to missing documentation: note it in the report as unverifiable and proceed.
- Conflicting information from different sources: document the discrepancy and use AskUserQuestion for clarification.
- Claim directly contradicts an authoritative source: STOP, and flag the discrepancy to the user with evidence.
- Security-related claim is incorrect: BLOCK the operation and require explicit user acknowledgment.

Claim extraction

Identify claims that reference external sources for verification. Does the content reference external documentation or standards?
- Yes: apply claim extraction to identify verifiable assertions.
- No: no fact-checking is needed for this content.

Claim types to extract:

- Library API claims: "useState returns a tuple"
- Documentation references: "according to the React docs"
- Standard compliance: "follows WCAG 2.1 AA"
- Version-specific behavior: "in React 18, Suspense..."
- Performance claims: "O(log n) complexity per MDN"
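The extraction step can be sketched with simple pattern matching. The regexes below are illustrative assumptions keyed to the claim types listed above, not the skill's actual implementation:

```python
import re

# Hypothetical patterns for spotting externally verifiable claims;
# each key names one of the claim types from the list above.
CLAIM_PATTERNS = {
    "documentation_reference": re.compile(r"according to the \w+ docs", re.IGNORECASE),
    "standard_compliance": re.compile(r"\b(WCAG|RFC|ISO|ECMA)[\s-]?[\d.]+\b"),
    "version_specific": re.compile(r"\bin \w+ \d+(\.\d+)*\b", re.IGNORECASE),
    "performance": re.compile(r"O\([^)]+\)"),
}

def extract_claims(text: str) -> list[dict]:
    """Return sentences that match a claim pattern, tagged by type."""
    claims = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for claim_type, pattern in CLAIM_PATTERNS.items():
            if pattern.search(sentence):
                claims.append({"type": claim_type, "text": sentence.strip()})
                break  # one type per sentence is enough to queue it for verification
    return claims
```

In practice the classifier would be the model itself rather than regexes; the point is that only sentences touching an external source enter the verification queue.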

Version-specific example:

- Claim: "React 18 introduces automatic batching for all updates"
- Verification: query Context7 with topic="batching" against the React 18 docs
- Result: confirmed. React 18 automatically batches state updates inside promises, setTimeout, and native event handlers.

Source selection

Choose the appropriate verification source based on claim type. What type of claim needs verification?

- Library claim: use Context7 with resolve-library-id, then get-library-docs.
- Standard or specification claim: use WebSearch for the official specification.
- General technical claim: use WebSearch with an authoritative domain filter.
- Claim citing a specific URL: use WebFetch to retrieve and verify it.

Source priority:

1. Context7 for library documentation (trust score 7+)
2. WebFetch for specific URLs cited in claims
3. WebSearch for general technical claims
4. Mark as unverifiable if no source is available

Confidence assessment

Calculate verification confidence based on evidence quality. Has verification evidence been collected?

- Yes: apply confidence assessment to rate verification quality.
- No: continue evidence collection before assessment.

Confidence levels:

- 90-100: exact match with an authoritative source
- 80-89: strong match with minor wording differences
- 70-79: partial match, some details unverified
- 60-69: weak match, significant uncertainty
- 0-59: no match or contradictory evidence
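The confidence bands above can be captured as a small banding function. This is a minimal sketch assuming integer scores:

```python
def confidence_label(score: int) -> str:
    """Map a 0-100 verification confidence score to the skill's bands."""
    if not 0 <= score <= 100:
        raise ValueError(f"confidence must be 0-100, got {score}")
    if score >= 90:
        return "exact match with authoritative source"
    if score >= 80:
        return "strong match with minor wording differences"
    if score >= 70:
        return "partial match, some details unverified"
    if score >= 60:
        return "weak match, significant uncertainty"
    return "no match or contradictory evidence"

def needs_flag(score: int) -> bool:
    """Claims below the 80 threshold are flagged in the report."""
    return score < 80
```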

Threshold: Flag claims with confidence below 80

Discrepancy reporting

Format and report verification failures with evidence. Is the verification confidence below 80?

- Yes: apply discrepancy reporting to document the issue.
- No: mark the claim as verified.

Discrepancy report format:

- Claim: the original assertion made
- Source: where the claim was made
- Verification source: the Context7/WebSearch result
- Evidence: the actual information from the source
- Confidence: 0-100 score
- Recommendation: suggested correction or note
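The report format could be carried by a small structure like this; field names mirror the list above, and the DISPUTED/UNCERTAIN labels follow the interpretation thresholds used elsewhere in this skill (below 60 is disputed, 60-79 is uncertain):

```python
from dataclasses import dataclass

@dataclass
class DiscrepancyReport:
    """One entry in the verification report."""
    claim: str
    source: str
    verification_source: str
    evidence: str
    confidence: int
    recommendation: str

    def render(self) -> str:
        # Below 60 the claim contradicts or is unsupported; 60-79 is uncertain.
        status = "DISPUTED" if self.confidence < 60 else "UNCERTAIN"
        return "\n".join([
            f"[{status}] Claim: {self.claim}",
            f"  Source: {self.source}",
            f"  Verification source: {self.verification_source}",
            f"  Evidence: {self.evidence}",
            f"  Confidence: {self.confidence}/100",
            f"  Recommendation: {self.recommendation}",
        ])
```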
Verification sources

Authoritative sources for different claim types.

Library documentation (Context7 MCP):

- React: /facebook/react
- Next.js: /vercel/next.js
- TypeScript: /microsoft/typescript
- NixOS: /nixos/nixpkgs

Web standards (WebSearch with domain filters):

- MDN Web Docs: developer.mozilla.org
- W3C: w3.org
- WHATWG: html.spec.whatwg.org
- OWASP: owasp.org
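One way to apply a domain filter is to append the common `site:` operator to the query, assuming the search backend supports it. The mapping below just restates the list above:

```python
# Authoritative domains for web-standards claims, from the list above.
AUTHORITATIVE_DOMAINS = {
    "mdn": "developer.mozilla.org",
    "w3c": "w3.org",
    "whatwg": "html.spec.whatwg.org",
    "owasp": "owasp.org",
}

def build_search_query(claim: str, body: str) -> str:
    """Build a WebSearch query restricted to one authoritative domain."""
    domain = AUTHORITATIVE_DOMAINS[body]
    return f"{claim} site:{domain}"
```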

Categories of verifiable claims

- API behavior: function signatures, return types, parameters
- Configuration: config options, default values, valid settings
- Best practices: recommended patterns from official docs
- Deprecation: API deprecation status and alternatives
- Compatibility: version compatibility and requirements
- Performance: complexity claims, benchmark references
- Security: security recommendations and vulnerability info

Confidence score interpretation

- 80+: Verified. The claim matches an authoritative source.
- 60-79: Uncertain. Partial verification; review recommended.
- Below 60: Disputed. The claim contradicts or is unsupported by the source.
- Unverifiable: no authoritative source is available.
Best practices

- Use Context7 as the primary source for library and framework documentation claims.
- Flag all claims with verification confidence below 80 in fact-check results.
- Document the evidence source for every verification.
- Prefer libraries with a Context7 trust score of 7+ for verification.
- Use WebSearch as a fallback when Context7 is unavailable.
- Include direct quotes from sources as evidence.
- Prefer official documentation over third-party sources.
- Note version context when verifying version-specific claims, and note when the verification source has a version mismatch.
- Cross-reference disputed claims with multiple sources.

Anti-patterns

- Marking claims as verified without an actual source check: always query Context7 or WebSearch for evidence before marking a claim verified.
- Relying on only one source for disputed claims: cross-reference with multiple sources when confidence is borderline (70-85).
- Verifying claims without considering version differences: note the version context and verify against the appropriate documentation version.
- Attempting to verify every statement, including obvious facts: focus on claims referencing external sources, APIs, and specifications.

Rules

Always:

- Query authoritative sources before verification.
- Document evidence for all verification results.
- Flag discrepancies with confidence scores.

Never:

- Mark claims verified without a source check.
- Verify claims based on assumption or memory.
- Ignore version context in verification.

Integration

- Primary agent using this skill for verification
- Uses fact-check for documentation accuracy
- Uses fact-check to verify documentation claims
- Core tool for library documentation verification
- Evidence collection methodology
- Documentation accuracy standards