Claude Code Plugins

Community-maintained marketplace



Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions before using it.

SKILL.md

name: scientist
description: Universal scientific thinking framework for rigorous inquiry across all domains - from engineering and data analysis to social sciences, humanities, arts, and human behavior. Use when approaching problems systematically, investigating phenomena, making evidence-based decisions, analyzing complex systems, troubleshooting issues, evaluating claims, conducting research, or when deep understanding and intellectual rigor are required regardless of domain.
license: Complete terms in LICENSE.txt

Scientific Thinking Framework

A universal methodology for rigorous inquiry, systematic investigation, and evidence-based reasoning applicable to any domain of knowledge - technical, social, artistic, or humanistic.

Core Principles

Science is not just about laboratories and equations - it's a way of thinking that applies to understanding any phenomenon:

  • Empiricism: Base conclusions on observable evidence, not assumptions
  • Skepticism: Question everything, including your own hypotheses
  • Falsifiability: Actively seek evidence that could prove you wrong
  • Reproducibility: Document methods so others (or future you) can verify
  • Parsimony: Prefer simpler explanations when evidence is equal
  • Humility: Accept uncertainty and acknowledge limits of knowledge
  • Systematic observation: Look beyond surface patterns to underlying mechanisms

Universal Scientific Method

Phase 1: Observation and Question Formulation

Define the phenomenon clearly:

  • What exactly are you observing or investigating?
  • What patterns, behaviors, or outcomes need explanation?
  • What assumptions are you making? List them explicitly.

Frame precise questions:

  • Technical domains: "Why does this system behave this way under these conditions?"
  • Human domains: "What factors influence this behavior or decision pattern?"
  • Creative domains: "What makes this approach effective or resonant?"
  • Social domains: "How do these variables interact to produce this outcome?"

Identify what you don't know:

  • What information is missing?
  • What could you be overlooking?
  • Where might your perspective be limited?

Phase 2: Background Research

Survey existing knowledge:

  • What has been documented about similar phenomena?
  • What theories, frameworks, or models already exist?
  • What methods have others used successfully?
  • What contradictions or gaps exist in current understanding?

Cross-disciplinary thinking:

  • Can insights from other domains apply here?
  • Are there analogous problems with known solutions?
  • What can adjacent fields teach about this phenomenon?

Document your sources:

  • Maintain clear attribution and traceability
  • Note the quality and reliability of sources
  • Identify where sources conflict or agree

Phase 3: Hypothesis Formation

Develop testable explanations:

  • State your hypothesis clearly and specifically
  • Make predictions: "If X is true, then we should observe Y"
  • Ensure your hypothesis is falsifiable (can be proven wrong)

Consider alternatives:

  • What other explanations could fit the observations?
  • What would it take to distinguish between competing hypotheses?
  • Avoid confirmation bias - don't just seek supporting evidence

Make assumptions explicit:

  • What are you taking for granted?
  • Which assumptions are critical vs. peripheral?
  • How could each assumption be wrong?

Phase 4: Investigation Design

Choose appropriate methods:

For technical/engineering problems:

  • Controlled experiments with isolated variables
  • A/B testing and comparative analysis
  • Performance benchmarking and measurement
  • Simulation and modeling

For human/social phenomena:

  • Structured observation and field notes
  • Interviews and qualitative data collection
  • Pattern analysis across multiple cases
  • Historical and contextual analysis

For creative/artistic work:

  • Comparative analysis of techniques
  • Iterative experimentation with variation
  • Audience response and feedback loops
  • Systematic exploration of parameters

For complex systems:

  • Multi-method triangulation
  • Longitudinal observation over time
  • Component isolation and integration testing
  • Network and relationship mapping

Control for bias:

  • How might your expectations influence observations?
  • Are you measuring what you think you're measuring?
  • What confounding factors could explain results?
  • How can you validate your measurement approach?
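
To make "controlled comparison" concrete for a technical problem, here is a minimal sketch of an A/B-style comparison evaluated with a permutation test. The latency numbers and variant names are purely hypothetical; the point is the structure: pool the measurements, shuffle the labels, and ask how often chance alone produces a difference as large as the one observed.

```python
import random
import statistics

# Hypothetical latency samples (ms) from two variants of a system.
# In a real investigation these would come from your own measurements.
variant_a = [120, 118, 125, 122, 130, 119, 124, 121]
variant_b = [112, 115, 110, 117, 113, 111, 116, 114]

observed_diff = statistics.mean(variant_a) - statistics.mean(variant_b)

# Permutation test: if the labels "A" and "B" were irrelevant, how often
# would a random relabeling produce a difference at least this large?
pooled = variant_a + variant_b
n_a = len(variant_a)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(f"Observed difference: {observed_diff:.1f} ms, approximate p-value: {p_value:.4f}")
```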

Phase 5: Data Collection and Documentation

Systematic observation:

  • Record observations as they happen, not from memory
  • Use consistent measurement methods
  • Note contextual factors and conditions
  • Capture both expected and unexpected results

Quantitative approaches:

  • Define metrics clearly before collecting data
  • Ensure measurements are consistent and repeatable
  • Record raw data, not just summaries
  • Note precision and uncertainty in measurements

Qualitative approaches:

  • Capture detailed descriptions and narratives
  • Look for patterns, themes, and exceptions
  • Record context and nuance
  • Use multiple sources or perspectives when possible

Documentation standards:

  • Timestamp all observations
  • Record methodology and any deviations
  • Note tool versions, configurations, conditions
  • Keep raw data separate from interpretations
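
As one way to apply these documentation standards in code-driven work, the sketch below appends timestamped observations to a JSON-lines log, recording tool versions and conditions and keeping the raw value separate from any interpretation. The field names, file path, and example values are illustrative assumptions, not a prescribed schema.

```python
import json
import platform
import sys
from datetime import datetime, timezone

def record_observation(path, raw_value, conditions, interpretation=None):
    """Append one timestamped observation as a JSON line.

    Raw data and interpretation live in separate fields so the original
    measurement is never overwritten by later analysis.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "raw_value": raw_value,            # what was actually measured
        "conditions": conditions,          # context: config, environment, load
        "tool_versions": {"python": sys.version.split()[0],
                          "platform": platform.platform()},
        "interpretation": interpretation,  # optional, clearly separated
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage with hypothetical values:
record_observation("observations.jsonl",
                   raw_value=132.4,
                   conditions={"load": "peak", "cache": "cold"},
                   interpretation="slower than baseline, cause unknown")
```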

Phase 6: Analysis and Interpretation

Look for patterns:

  • What relationships or correlations appear?
  • Are there unexpected results or outliers?
  • What doesn't fit the pattern?
  • How strong is the evidence?

Statistical thinking (when applicable):

  • Is the sample size adequate?
  • Could results occur by chance?
  • Are effects practically significant, not just statistically?
  • What's the effect size and confidence?
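
A minimal sketch of this kind of statistical thinking, using only the Python standard library: compute the mean difference between two hypothetical groups, express it as an effect size (Cohen's d), and attach a rough normal-approximation confidence interval. The data values are invented for illustration.

```python
import math
import statistics

group_a = [78, 82, 85, 80, 79, 84, 81, 83]   # hypothetical measurements
group_b = [74, 77, 76, 79, 75, 78, 73, 77]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
sd_a, sd_b = statistics.stdev(group_a), statistics.stdev(group_b)
n_a, n_b = len(group_a), len(group_b)

# Cohen's d: the mean difference scaled by the pooled standard deviation.
pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
cohens_d = (mean_a - mean_b) / pooled_sd

# Rough 95% confidence interval for the mean difference (normal approximation).
diff = mean_a - mean_b
se_diff = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
ci_low, ci_high = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"Mean difference: {diff:.2f}, Cohen's d: {cohens_d:.2f}")
print(f"Approximate 95% CI for the difference: ({ci_low:.2f}, {ci_high:.2f})")
```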

Causal reasoning:

  • Correlation ≠ causation - be explicit about this
  • What mechanisms could explain the relationship?
  • Are there alternative causal explanations?
  • What would demonstrate causation vs. correlation?

Acknowledge limitations:

  • What couldn't you measure or observe?
  • What factors couldn't you control?
  • How might results differ under different conditions?
  • What questions remain unanswered?

Phase 7: Conclusion and Communication

State findings clearly:

  • What did you learn?
  • How confident are you in these conclusions?
  • What evidence supports each conclusion?
  • What contradicts or weakens your conclusions?

Distinguish evidence levels:

  • Strong evidence: multiple independent lines of support, robust to testing
  • Moderate evidence: consistent patterns but limitations remain
  • Weak evidence: preliminary findings, needs further investigation
  • Speculation: plausible but not yet tested

Practical implications:

  • What actions or decisions does this inform?
  • How should this change practice or thinking?
  • What are the risks of acting vs. not acting on this knowledge?

Future directions:

  • What questions emerged from this investigation?
  • What should be tested next?
  • How could methodology be improved?
  • What would increase confidence in conclusions?

Domain-Specific Applications

Engineering and Technical Systems

Troubleshooting approach:

  1. Reproduce the issue consistently
  2. Form hypothesis about root cause
  3. Design minimal test to isolate cause
  4. Implement fix and verify effectiveness
  5. Test for unintended consequences
  6. Document for future reference
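
One way to anchor several of these steps in software work is to capture the minimal reproduction as an automated regression test. The parser bug below is hypothetical and already fixed in the sketch; the pattern is what matters: the smallest failing input becomes a permanent check on the fix and on its side effects.

```python
# Hypothetical bug: a parser was dropping the last record when the input
# had no trailing newline. The minimal reproduction is captured as a test,
# so the root-cause hypothesis stays explicit and the fix stays verified.

def parse_records(text: str) -> list[str]:
    """Split input into non-empty records (fixed version)."""
    return [line for line in text.split("\n") if line]

def test_last_record_kept_without_trailing_newline():
    # Minimal reproduction of the original report.
    assert parse_records("a\nb\nc") == ["a", "b", "c"]

def test_trailing_newline_unchanged():
    # Guard against unintended consequences of the fix.
    assert parse_records("a\nb\nc\n") == ["a", "b", "c"]

if __name__ == "__main__":
    test_last_record_kept_without_trailing_newline()
    test_trailing_newline_unchanged()
    print("regression tests passed")
```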

Performance optimization:

  1. Establish baseline measurements
  2. Identify bottlenecks through profiling
  3. Hypothesize optimization strategies
  4. Test changes with controlled comparisons
  5. Measure actual vs. predicted improvements
  6. Validate stability and edge cases
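
For the baseline and controlled-comparison steps, a minimal timing harness can look like the sketch below. The two functions are stand-ins for a current and a candidate implementation; taking the median over repeated runs reduces the influence of scheduling noise.

```python
import statistics
import timeit

# Hypothetical functions standing in for the current and candidate
# implementations; in practice you would import your own code.
def current_implementation(data):
    return sorted(data)

def candidate_implementation(data):
    return sorted(data, reverse=True)[::-1]

data = list(range(10_000, 0, -1))

def benchmark(fn, repeats=5, number=100):
    # timeit.repeat returns one total time per repeat; use the median
    # and divide by the call count to get time per call.
    times = timeit.repeat(lambda: fn(data), repeat=repeats, number=number)
    return statistics.median(times) / number

baseline = benchmark(current_implementation)
candidate = benchmark(candidate_implementation)
print(f"baseline:  {baseline * 1e6:.1f} microseconds per call")
print(f"candidate: {candidate * 1e6:.1f} microseconds per call")
print(f"speedup:   {baseline / candidate:.2f}x")
```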

Data Analysis and Decision-Making

Exploratory analysis:

  1. Understand data provenance and quality
  2. Visualize distributions and relationships
  3. Identify anomalies and investigate causes
  4. Test assumptions about data structure
  5. Look for confounding variables
  6. Validate findings with different subsets
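
A small illustration of inspecting a distribution and flagging anomalies, using only the standard library and invented values: compute quartiles and mark points outside the usual 1.5 IQR fences for follow-up investigation rather than silently discarding them.

```python
import statistics

# Hypothetical daily measurements; in practice, load your own dataset.
values = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 25.7, 12.3, 11.7, 12.1]

q1, _, q3 = statistics.quantiles(values, n=4)   # quartile cut points
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [v for v in values if v < low or v > high]
print(f"median: {statistics.median(values):.2f}, IQR: {iqr:.2f}")
print(f"flagged for investigation (outside [{low:.2f}, {high:.2f}]): {outliers}")
```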

Decision analysis:

  1. Define decision criteria explicitly
  2. Identify available evidence and gaps
  3. Assess quality of evidence
  4. Consider base rates and prior probabilities
  5. Evaluate alternatives systematically
  6. Acknowledge uncertainty in conclusions
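
Base rates and prior probabilities are where intuition most often fails; a short Bayes'-rule sketch with hypothetical numbers shows why a positive signal can still be more likely wrong than right when the base rate is low.

```python
# Hypothetical numbers: a diagnostic check that is 95% sensitive with a
# 5% false-positive rate, applied where the true base rate is only 2%.
base_rate = 0.02          # prior probability the condition is present
sensitivity = 0.95        # P(positive | condition)
false_positive = 0.05     # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

print(f"P(condition | positive signal) = {posterior:.2%}")
# Roughly 28%: even a fairly accurate signal is usually wrong here,
# because the condition itself is rare.
```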

Human Behavior and Social Systems

Behavioral analysis:

  1. Observe patterns without initial judgment
  2. Consider multiple motivations and contexts
  3. Look for environmental and systemic factors
  4. Test explanations against diverse cases
  5. Recognize individual vs. structural causes
  6. Avoid overgeneralization from limited examples

Organizational and system dynamics:

  1. Map relationships and feedback loops
  2. Identify incentive structures
  3. Look for emergent vs. designed properties
  4. Consider time delays and lag effects
  5. Test interventions on small scale first
  6. Monitor for unintended consequences

Creative and Artistic Domains

Technique development:

  1. Study exemplars and deconstruct methods
  2. Vary parameters systematically
  3. Document what works under which conditions
  4. Build personal knowledge base of approaches
  5. Test techniques across different contexts
  6. Reflect on why certain approaches succeed

Aesthetic evaluation:

  1. Separate personal preference from effectiveness
  2. Gather diverse perspectives and feedback
  3. Identify principles underlying reactions
  4. Test variations to understand key factors
  5. Consider cultural and contextual influences
  6. Build evidence-based practice

Knowledge Acquisition and Learning

Learning new domains:

  1. Start with fundamental concepts and principles
  2. Build mental models through active testing
  3. Seek diverse sources and perspectives
  4. Test understanding through application
  5. Identify and fill knowledge gaps
  6. Connect new knowledge to existing frameworks

Skill development:

  1. Break complex skills into components
  2. Practice with immediate feedback
  3. Vary conditions to build adaptability
  4. Track progress with concrete measures
  5. Analyze errors for systematic patterns
  6. Adjust approach based on evidence

Critical Thinking Tools

Recognizing Cognitive Biases

Confirmation bias:

  • Are you only looking for supporting evidence?
  • Have you seriously considered alternatives?
  • Would you notice if you were wrong?

Availability bias:

  • Are recent or memorable examples distorting judgment?
  • Are you overweighting vivid anecdotes vs. data?

Anchoring:

  • Is your first estimate unduly influencing subsequent thinking?
  • Have you independently verified initial assumptions?

Dunning-Kruger effect:

  • Are you overconfident in areas of limited expertise?
  • Have you consulted those with deeper domain knowledge?

Evaluating Evidence Quality

Source reliability:

  • What's the source's expertise and track record?
  • Are there conflicts of interest?
  • Is methodology transparent and sound?
  • Can findings be independently verified?

Sample quality:

  • Is the sample representative?
  • Is sample size adequate for conclusions?
  • What selection biases might exist?

Measurement validity:

  • Are you measuring what you think you're measuring?
  • Are measurements consistent and repeatable?
  • How precise are the measurements?
  • What's the margin of error?
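
As a concrete illustration, the sketch below turns repeated (hypothetical) measurements into a mean with an approximate 95% margin of error; because the margin shrinks with the square root of the sample size, it also gives a rough sense of whether the sample is adequate for the precision you need.

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity.
measurements = [4.82, 4.79, 4.85, 4.80, 4.83, 4.78, 4.84, 4.81]

mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)
n = len(measurements)

# Standard error and an approximate 95% margin of error (normal approximation;
# with small n, a t-distribution multiplier would be slightly wider).
standard_error = sd / math.sqrt(n)
margin_of_error = 1.96 * standard_error

print(f"mean = {mean:.3f} +/- {margin_of_error:.3f} (approx. 95%)")
```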

Logical Reasoning

Deductive reasoning:

  • If premises are true, conclusion must follow
  • Check for logical validity
  • Verify premises are actually true

Inductive reasoning:

  • Generalizing from specific observations
  • How broad is the evidence base?
  • Are there counterexamples?
  • How confident can you be in the generalization?

Abductive reasoning:

  • Inference to best explanation
  • What other explanations exist?
  • Is the simplest explanation actually the best one?
  • What would distinguish between explanations?

Red Flags and Warning Signs

Watch for:

  • Overconfident claims without proportional evidence
  • Cherry-picked data or selective reporting
  • Lack of falsifiability or immunity to evidence
  • Ad hoc explanations for contradictory findings
  • Extraordinary claims without extraordinary evidence
  • Conflation of correlation with causation
  • Hidden assumptions or undefined terms
  • Arguments from authority rather than evidence
  • Emotional reasoning or appeals to fear/hope
  • False dichotomies or oversimplification

Best Practices

Start with Curiosity, Not Answers

Approach every investigation with genuine openness to being wrong. The goal is understanding, not validation.

Make Thinking Explicit

Write down your reasoning process. This helps identify gaps, assumptions, and potential errors that mental reasoning might miss.

Embrace Productive Failure

Failed hypotheses are valuable - they constrain the solution space and guide better questions. Document what doesn't work.

Seek Disconfirming Evidence

Actively look for reasons your hypothesis might be wrong. This is more valuable than collecting supporting evidence.

Calibrate Confidence

Match confidence to evidence quality. "I'm 60% confident based on X, Y, Z" is more useful than false certainty.

Iterate and Refine

First attempts are rarely optimal. Use each cycle to refine questions, methods, and understanding.

Collaborate and Review

Other perspectives catch blind spots. Present findings to others and genuinely consider criticisms.

Document for Future Learning

Your future self (and others) will benefit from understanding not just what you found, but how and why you investigated as you did.

When to Use This Framework

This scientific approach is valuable when:

  • Diagnosing problems or troubleshooting systems
  • Making important decisions with uncertain information
  • Learning new domains or developing expertise
  • Evaluating claims or assessing evidence
  • Designing solutions to complex problems
  • Investigating unexpected behaviors or outcomes
  • Challenging existing practices or assumptions
  • Building knowledge in unfamiliar territory
  • Reducing risk in high-stakes situations
  • Pursuing truth over comfort or convenience

Integration with Other Skills

Scientific thinking enhances other skills:

  • Technical work: More reliable debugging, optimization, and design
  • Communication: Evidence-based arguments, clearer reasoning
  • Project management: Data-driven decisions, risk assessment
  • Learning: Faster skill acquisition, deeper understanding
  • Problem-solving: Systematic exploration, better solutions
  • Creativity: Informed experimentation, reflective practice

Limitations and Boundaries

When scientific thinking may not apply:

  • Immediate emergencies requiring instant action
  • Purely subjective aesthetic judgments
  • Ethical first principles and values
  • Situations where experimentation is unethical
  • Decisions requiring value judgments over facts

Recognize that:

  • Not everything can or should be measured
  • Some questions have no single right answer
  • Context and judgment matter alongside evidence
  • Uncertainty is often irreducible
  • Perfect information is impossible

The scientific mindset is about rigor, honesty, and systematic inquiry - bringing these qualities to whatever domain you're working in.