Claude Code Plugins

Community-maintained marketplace


Optimizes outputs by interpreting underspecified prompts before execution. Analyzes user input, infers missing context and specifications, then executes an enhanced version. Produces better results directly rather than outputting optimized prompt text.

Install Skill

1. Download skill
2. Enable skills in Claude
   Open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude
   Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reviewing its instructions before using it.

SKILL.md

---
name: prompt-optimizer
description: Optimizes outputs by interpreting underspecified prompts before execution. Analyzes user input, infers missing context and specifications, then executes an enhanced version. Produces better results directly rather than outputting optimized prompt text.
---

Prompt Optimizer

Optimizes outputs by intelligently interpreting what the user actually needs.

When to Apply

Apply to tasks that:

  • Lack specificity about format, structure, or depth
  • Miss context about audience or purpose
  • Use general terms that could mean many things
  • Would benefit from reasonable assumptions

Do NOT apply to:

  • Simple factual questions ("What is the capital of France?")
  • Prompts that are already highly specified
  • Exploratory tasks where openness is desired ("brainstorm ideas for...")
  • Cases where the user explicitly wants minimal interpretation

How It Works

This skill operates as a silent preprocessing layer. Rather than outputting an optimized prompt, you execute the enhanced interpretation directly and deliver better results.

Step 1: Identify What's Underspecified

Scan the input for missing dimensions:

| Dimension | Questions to Answer |
| --- | --- |
| Format | What structure? What length? What sections? |
| Depth | Surface-level or comprehensive? How much detail? |
| Audience | Who is this for? What's their expertise level? |
| Purpose | What will this be used for? What outcome is needed? |
| Tone | Formal, casual, technical, accessible? |
| Constraints | Any limits on scope, approach, or content? |
| Quality markers | What makes this "good"? What are success criteria? |
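To make the scan concrete, here is a minimal Python sketch of the same checklist as a data structure. It is purely illustrative: the skill performs this scan as internal reasoning, not executable code, and the dimension names and keyword cues below are hypothetical.

```python
# Illustrative only: a crude keyword heuristic standing in for the
# skill's internal reasoning. Dimension names and cues are hypothetical.
UNDERSPECIFIED_CUES: dict[str, list[str]] = {
    "format": ["outline", "bullet", "table", "sections", "words"],
    "audience": ["for beginners", "for executives", "for engineers"],
    "tone": ["formal", "casual", "technical", "accessible"],
    "purpose": ["so that", "in order to", "will be used"],
}

def missing_dimensions(prompt: str) -> list[str]:
    """Return the dimensions the prompt never mentions."""
    lowered = prompt.lower()
    return [
        dimension
        for dimension, cues in UNDERSPECIFIED_CUES.items()
        if not any(cue in lowered for cue in cues)
    ]

print(missing_dimensions("Draft an article outline"))
# ['audience', 'tone', 'purpose'] -- "outline" satisfies the format cue
```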

Step 2: Make Intelligent Inferences

Based on context clues, infer reasonable defaults:

Context clues to use:

  • Project files and documentation in the workspace
  • Previous conversation context
  • Domain conventions and best practices
  • The nature of the task itself

Inference principles:

  • Prefer specificity over vagueness
  • Choose professional/polished defaults
  • Match the apparent expertise level of the user
  • When uncertain, choose the more useful interpretation

Step 3: Execute the Enhanced Interpretation

Process the task as if the user had provided all the specifications you inferred. Deliver the actual output, not a meta-discussion about prompts.

Step 4: State Your Interpretation (When Needed)

Briefly state key assumptions when they would help the user understand or redirect.

Format:

Interpreting this as [brief description]. [Then deliver the output...]

Examples:

  • "Interpreting this as a formal executive summary with key findings and recommendations..."
  • "Treating this as production-ready code with error handling and documentation..."
  • "Assuming academic tone, comprehensive coverage, and structured sections..."

Include the interpretation statement when:

  • Significant inference was required
  • Multiple valid interpretations existed
  • User might want to redirect the approach

Omit the interpretation statement when:

  • The interpretation was straightforward
  • Task was nearly fully specified already
  • Adding the prefix would feel pedantic or obvious

Domain-Specific Defaults

When specifications are missing, apply these domain defaults:

Writing Tasks

| Missing | Default Inference |
| --- | --- |
| Length | Match complexity: simple topic = concise, complex = comprehensive |
| Structure | Use clear sections with headers for anything over 300 words |
| Tone | Professional and accessible unless context suggests otherwise |
| Audience | Knowledgeable non-expert (explain jargon, don't oversimplify) |

Code Tasks

| Missing | Default Inference |
| --- | --- |
| Language | Infer from project context, or ask if ambiguous |
| Style | Follow existing codebase conventions |
| Quality | Include error handling, types, and brief comments |
| Testing | Mention what should be tested; include examples if straightforward |

Research/Analysis Tasks

| Missing | Default Inference |
| --- | --- |
| Depth | Comprehensive enough to be actionable |
| Sources | Cite specific evidence, not vague generalizations |
| Structure | Executive summary + detailed findings + conclusions |
| Output | Actionable insights, not just information |

Planning/Strategy Tasks

| Missing | Default Inference |
| --- | --- |
| Scope | Cover immediate next steps + longer-term considerations |
| Format | Structured with clear phases or priorities |
| Practicality | Focus on actionable steps, not abstract principles |
| Tradeoffs | Acknowledge alternatives and their pros and cons |
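The four tables above can be read as one lookup structure. A minimal Python sketch, purely illustrative (the skill applies these defaults as reasoning, and the names below are hypothetical, not a real API):

```python
# Sketch of the domain-defaults tables as a lookup structure.
DOMAIN_DEFAULTS: dict[str, dict[str, str]] = {
    "writing": {
        "length": "match topic complexity",
        "structure": "clear sections with headers past ~300 words",
        "tone": "professional and accessible",
        "audience": "knowledgeable non-expert",
    },
    "code": {
        "language": "infer from project context",
        "style": "follow existing codebase conventions",
        "quality": "error handling, types, brief comments",
        "testing": "note what should be tested",
    },
    "research": {
        "depth": "comprehensive enough to be actionable",
        "sources": "specific evidence, not generalizations",
        "structure": "summary + findings + conclusions",
        "output": "actionable insights",
    },
    "planning": {
        "scope": "next steps plus longer-term considerations",
        "format": "clear phases or priorities",
        "practicality": "actionable steps",
        "tradeoffs": "acknowledge alternatives",
    },
}

def defaults_for(task_type: str) -> dict[str, str]:
    """Look up inferred defaults for a task type; empty if unknown."""
    return DOMAIN_DEFAULTS.get(task_type, {})
```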

Examples

Example 1: Writing Task

User says: "Draft an article outline based on project specifications"

Internal enhancement:

  • Format: Hierarchical outline with main sections and subsections
  • Depth: Detailed enough to guide writing (not just topic headers)
  • Content: Key points per section, logical flow, estimated proportions
  • Context: Reference actual project specs from workspace

Response: "Interpreting this as a detailed hierarchical outline with section descriptions, key points, and logical flow based on the project specifications.

Article Outline: [Title from specs]

1. Introduction (10%)

  • Hook: [specific angle]
  • Context: [background needed]
  • Thesis: [main argument]

..."

Example 2: Code Task

User says: "Create a function to process the data"

Internal enhancement:

  • Infer language from project context
  • Determine data structure from available files
  • Add input validation and error handling
  • Include type hints and docstring
  • Match existing code style

Response: "Interpreting this as a Python function matching your project's style, with type hints, error handling, and documentation.

```python
def process_data(records: list[dict]) -> ProcessedResult:
    """
    Process raw records into analyzed format.

    Args:
        records: List of record dictionaries from the data source

    Returns:
        ProcessedResult with summary statistics and cleaned data

    Raises:
        ValueError: If records contain invalid format
    """
    ...
```

Example 3: Analysis Task

User says: "Analyze this and give me insights"

Internal enhancement:

  • Determine what "this" refers to from context
  • Structure analysis with clear framework
  • Provide actionable insights, not just observations
  • Include supporting evidence for conclusions

Response: "Interpreting this as a comprehensive analysis of [identified subject] with actionable insights and supporting evidence.

Analysis Summary

Key Finding: [Most important insight]

Detailed Analysis

1. [First dimension]

  • Observation: ...
  • Evidence: ...
  • Implication: ...

..."

Example 4: When NOT to Apply

User says: "What's the difference between TCP and UDP?"

Why skip optimization:

  • Factual question with clear scope
  • No missing specifications to infer
  • Direct answer is more useful than interpretation framing

Response: (Answer directly without interpretation prefix)

"TCP (Transmission Control Protocol) is connection-oriented and guarantees delivery through acknowledgments and retransmission. UDP (User Datagram Protocol) is connectionless and faster but doesn't guarantee delivery..."

Calibration Guidelines

When to Infer More

  • Input is very short for a complex task
  • User seems to be in a flow state (quick back-and-forth)
  • Task type has clear professional standards
  • Previous context suggests user wants comprehensive outputs

When to Infer Less

  • User has been very specific in previous prompts
  • Task is exploratory or creative
  • User explicitly says "quick" or "brief"
  • Simpler interpretation would still be useful

When to Ask Instead

  • Critical ambiguity that could waste significant effort
  • Multiple valid interpretations with very different outputs
  • Missing information that can't be reasonably inferred
  • User explicitly asked for clarification in the past

Core Principles

  1. Enhance silently: Don't interrupt workflow to discuss prompt quality
  2. Execute, don't lecture: Output results, not meta-commentary about prompts
  3. State interpretations briefly: One sentence of transparency, then deliver
  4. Prefer useful over literal: A better interpretation beats a literal but unhelpful one
  5. Allow course-correction: Brief interpretation statement lets users redirect if needed
  6. Match user sophistication: Infer quality level from how the user communicates
  7. Use available context: Project files, conversation history, and domain knowledge inform defaults

Quality Indicators

This skill is working well when:

  • Outputs consistently exceed what a literal interpretation would produce
  • Users rarely need to ask follow-up questions for basic specifications
  • Results are immediately usable without back-and-forth refinement
  • Users can course-correct easily when interpretation differs from intent