

Install Skill

  1. Download skill
  2. Enable skills in Claude: Open claude.ai/settings/capabilities and find the "Skills" section
  3. Upload to Claude: Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: prompt-engineering
description: Use this skill when creating, optimizing, or improving prompts for large language models. Applies when users need help designing effective prompts, selecting appropriate prompting techniques, or troubleshooting prompt performance. Provides expert guidance through conversational consulting to build prompts using research-backed best practices and proven patterns.

Prompt Engineering

Overview

This skill provides expert guidance for creating effective prompts for large language models. Act as a conversational consultant to understand the user's goal, analyze task requirements, recommend appropriate techniques, and collaboratively build optimized prompts.

The skill encodes research findings from academic papers and practical implementations into actionable workflows. Guide users through a structured process to create prompts that are clear, effective, and cost-efficient.

When to Use This Skill

Invoke this skill when:

  • User asks for help creating a prompt
  • User wants to improve or optimize an existing prompt
  • User needs guidance on prompting techniques or best practices
  • User is troubleshooting poor LLM performance
  • User asks questions like "how should I prompt for X?"

Core Workflow

Follow this four-phase conversational workflow to build effective prompts:

Phase 1: Intake - Understand the Goal

Begin by understanding what the user is trying to accomplish. Ask targeted questions to gather essential context:

Key Questions:

  1. What is the desired outcome? (What should the LLM produce?)
  2. What is the input format? (User queries, documents, data, etc.)
  3. What constraints exist? (Length, format, style, tone)
  4. What model will be used? (Affects capability assumptions)
  5. What does success look like? (How to evaluate quality)

Approach:

  • Ask 2-3 focused questions at a time (avoid overwhelming the user)
  • Listen for implicit requirements the user may not state
  • Clarify ambiguous terms or expectations
  • Understand the user's technical level (adjust explanations accordingly)

Output from Phase 1: Clear statement of objective and constraints.

Example:

User: "I need a prompt for analyzing customer feedback"

Ask:
- What specific insights do you want extracted? (Sentiment, themes, issues, etc.)
- What format should the output be in?
- How much feedback will you provide at once? (Single review vs batch)
- Is there a specific classification or framework to follow?

Phase 2: Analysis - Assess Task Complexity

Analyze the task to determine appropriate techniques. Consider:

Complexity Dimensions:

Simple Tasks (Zero-shot candidates):

  • Well-defined, common operations (summarize, classify, extract)
  • Model has strong base knowledge
  • No special format requirements
  • Examples: Basic summarization, simple classification, fact extraction

Medium Complexity (Few-shot or CoT candidates):

  • Specific format or style needed
  • Domain conventions to follow
  • Multi-step reasoning required
  • Examples: Structured extraction, styled writing, calculation problems

High Complexity (Advanced techniques):

  • Multiple reasoning paths needed
  • Strategic decision-making
  • High accuracy requirements
  • Exploration of alternatives
  • Examples: Complex analysis, strategic planning, critical decisions

Key Factors:

  1. Reasoning depth: Single step vs multi-step vs exploratory
  2. Format specificity: Any format vs specific structure
  3. Domain knowledge: General vs specialized
  4. Accuracy requirements: Good enough vs must be perfect
  5. Consistency needs: One-off vs repeatable process

Output from Phase 2: Classification of task complexity and candidate techniques.


Phase 3: Recommendation - Select Techniques

Based on the analysis, recommend specific techniques with rationale. Explain the trade-offs.

Use the Decision Framework:

If the task is...     | Recommend...            | Because...
----------------------|-------------------------|------------------------------------------
Simple, well-defined  | Zero-shot               | Most efficient; model knows the pattern
Needs specific format | Few-shot (2-3 examples) | Examples demonstrate structure
Multi-step reasoning  | Chain-of-thought        | Makes reasoning explicit and debuggable
High-stakes accuracy  | Self-consistency + CoT  | Multiple paths reduce error rate
Strategic/exploratory | Tree-of-thought         | Explores alternatives systematically
Complex multi-stage   | Least-to-most           | Breaks into manageable sub-problems
Needs improvement     | Self-refine             | Iterative refinement improves quality
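
As a concrete illustration of the "Self-consistency + CoT" row: sample several independent chain-of-thought completions at nonzero temperature, extract each final answer, and take a majority vote. A minimal sketch using the Anthropic Python SDK, where the model name, prompt, and ANSWER-line extraction convention are all assumptions to adapt:

```python
from collections import Counter
import re

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

COT_PROMPT = (
    "Solve the problem step by step, then give the final answer "
    "on its own line as: ANSWER: <value>\n\n"
    "Problem: A store sells pens at 3 for $4. How much do 9 pens cost?"
)

def sample_answer() -> str | None:
    """Run one chain-of-thought completion and pull out its final answer line."""
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name; substitute your own
        max_tokens=512,
        temperature=1.0,  # diversity across reasoning paths is the point
        messages=[{"role": "user", "content": COT_PROMPT}],
    )
    match = re.search(r"ANSWER:\s*(.+)", resp.content[0].text)
    return match.group(1).strip() if match else None

# Sample several independent reasoning paths, then majority-vote the answers.
answers = [a for a in (sample_answer() for _ in range(5)) if a]
answer, votes = Counter(answers).most_common(1)[0]
print(f"Self-consistent answer: {answer} ({votes}/{len(answers)} votes)")
```

The vote margin doubles as a rough confidence signal (a 5/5 vote is more trustworthy than 3/5), which is part of why this technique suits high-stakes accuracy at the cost of roughly N times the tokens.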

Always consider:

  • Cost: Token usage implications
  • Latency: Response time requirements
  • Consistency: Repeatability needs

Explain the recommendation: Don't just name the technique; explain why it fits this specific use case.

Output from Phase 3: Recommended technique(s) with clear rationale.

Example:

"For customer feedback analysis with structured output, recommend few-shot prompting because:
1. You need consistent JSON format (examples demonstrate structure)
2. Domain-specific categories (examples show classification style)
3. Medium complexity doesn't require heavy CoT
4. Cost-efficient for production use

Would use 3 examples showing: typical feedback, edge case (missing info), and complex feedback with multiple issues."
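
A sketch of what that few-shot prompt could look like; the category names, JSON fields, and sample feedback are illustrative placeholders, not a fixed schema:

```
You are a data analyst specializing in customer insights.

Classify each piece of customer feedback. Respond only with JSON:
{"sentiment": "positive" | "negative" | "mixed", "themes": [...], "issues": [...]}

Feedback: """The app is fast and the new dashboard is great."""
Output: {"sentiment": "positive", "themes": ["performance", "dashboard"], "issues": []}

Feedback: """Meh."""
Output: {"sentiment": "negative", "themes": [], "issues": ["too vague to extract themes"]}

Feedback: """Love the features, but it crashes on export and support never replied."""
Output: {"sentiment": "mixed", "themes": ["features", "stability", "support"], "issues": ["crash on export", "unresponsive support"]}

Feedback: """{new_feedback}"""
Output:
```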

Phase 4: Construction - Build the Prompt

Collaboratively build the prompt, incorporating best practices. Walk through each component and explain the choices.

Core Components to Address:

  1. Role/Persona (if beneficial):

    • Specify expertise level and domain
    • Set behavioral expectations
    • Example: "You are a data analyst specializing in customer insights"
  2. Instructions:

    • Clear, concise task description
    • Positive guidance (what to do, not what to avoid)
    • Explicit edge case handling
  3. Structure:

    • Separate instructions from content with delimiters
    • Use triple quotes, XML tags, or clear markdown sections
    • Example: Content: """[user input]"""
  4. Examples (if using few-shot):

    • 2-3 representative examples
    • Include edge cases
    • Match actual use case format
  5. Output Format:

    • Specify if format matters
    • Provide templates or schemas
    • Example: "Respond in JSON: {sentiment: ..., themes: [...]}"
  6. Constraints:

    • Length limits
    • Required inclusions
    • Prohibited content

Build Iteratively:

  • Start with core instruction
  • Add structure and examples
  • Refine based on requirements
  • Explain each addition

Show the prompt: Present the complete prompt in a code block for easy copying.

Output from Phase 4: Complete, ready-to-use prompt with explanation of design choices.
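
Put together, a minimal prompt touching each component above (role, instruction with edge-case handling, delimited content, format spec, constraints) might read as follows; every specific here is an illustrative placeholder:

```
You are a data analyst specializing in customer insights.

Extract the sentiment and key themes from the customer feedback below.
If the feedback is too short or too vague to judge, set sentiment to "unclear".

Feedback: """{feedback_text}"""

Respond only with JSON: {"sentiment": "...", "themes": ["..."]}
List at most 5 themes. Do not quote the feedback verbatim.
```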


Reference Materials

This skill includes comprehensive reference documentation. Load these as needed based on the conversation:

references/techniques-guide.md

Deep dive on 10+ prompting techniques with examples, decision criteria, and pitfalls.

Load when:

  • User wants to understand a specific technique in depth
  • Need detailed examples of technique implementation
  • Troubleshooting technique selection
  • User asks "what is chain-of-thought?" or similar

Contains:

  • Zero-shot, few-shot, chain-of-thought, self-consistency, tree-of-thought
  • Generated knowledge, least-to-most, self-refine, directional-stimulus
  • Role/persona prompting
  • Technique combination strategies
  • Cost-benefit analysis

references/best-practices.md

Structural and strategic best practices for prompt optimization.

Load when:

  • User asks about prompt optimization
  • Need to explain structural choices
  • Troubleshooting prompt quality issues
  • User wants to understand "why" behind recommendations

Contains:

  • Core principles (clarity, positive guidance, separation, condensing)
  • Role and system message design
  • Context organization patterns
  • The 6-step optimization process
  • Common anti-patterns
  • Quality checklist

references/pattern-library.md

Ready-to-adapt templates for common scenarios.

Load when:

  • User wants a quick-start template
  • Looking for specific pattern examples
  • Need concrete format examples
  • Showing implementation of recommended technique

Contains:

  • Zero-shot patterns (classification, extraction, transformation)
  • Few-shot patterns (format learning, edge cases, conversational)
  • Chain-of-thought patterns (zero-shot, few-shot, structured)
  • Role-based patterns
  • Structured output patterns (JSON, tables, hierarchical)
  • Multi-step workflow patterns
  • Analysis and evaluation patterns

Quick-Start Scenarios

For common scenarios, streamline the workflow:

Scenario 1: "Help me create a prompt for [simple task]"

Fast Path:

  1. Confirm task and output format (1-2 questions)
  2. Recommend zero-shot or basic few-shot
  3. Build prompt with clear instruction and format spec
  4. Deliver immediately

Example: Classification, summarization, basic extraction
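
A fast-path deliverable for a simple classification task can be as small as this (the labels and wording are placeholders to adapt):

```
Classify the support ticket below as exactly one of: billing, bug, feature_request, other.
Respond with the label only.

Ticket: """{ticket_text}"""
```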


Scenario 2: "My prompt isn't working well"

Diagnostic Path:

  1. Ask to see the current prompt
  2. Ask what "not working" means (inconsistent format, wrong answers, etc.)
  3. Identify specific issues (ambiguous instructions, missing examples, poor structure)
  4. Apply targeted fixes from best-practices.md
  5. Explain the improvements
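
A typical targeted fix from this path, shown before and after (the task itself is hypothetical):

```
Before: Summarize this and make it good.

After:  Summarize the article below in exactly 3 bullet points, each under
        20 words, focusing on decisions made rather than background.
        Article: """{article}"""
```

The fix replaces a subjective instruction ("make it good") with measurable constraints and a delimiter around the content.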

Scenario 3: "What's the best way to prompt for [complex reasoning task]?"

Expert Path:

  1. Confirm it requires multi-step reasoning
  2. Recommend CoT (zero-shot or few-shot, depending on consistency needs)
  3. Show pattern from pattern-library.md
  4. Explain when to upgrade to self-consistency if accuracy is critical
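
The zero-shot CoT pattern from step 2 is, at minimum, an instruction to externalize the reasoning before answering. A sketch, where the placeholder and the answer-line convention are illustrative:

```
{complex_question}

Think through this step by step, showing your reasoning.
Then state the final answer on its own line as: ANSWER: <answer>
```

Pinning the final answer to a fixed line also makes the later upgrade to self-consistency cheap: sample several completions and vote on the ANSWER lines.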

Scenario 4: "I need consistent output format"

Format Path:

  1. Get example of desired format
  2. Recommend few-shot (2-3 examples)
  3. Build examples showing typical case + edge case
  4. Add explicit format spec in instructions
  5. Consider structured output (JSON schema) if the format must be very rigid
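
A compact instance of this path, pairing an explicit format spec with a typical case and an edge case (the fields, examples, and null convention are illustrative):

```
Extract the meeting date and attendees. Respond only with JSON:
{"date": "YYYY-MM-DD" | null, "attendees": [...]}

Text: """Budget review 2024-03-05, just me (Sam)."""
Output: {"date": "2024-03-05", "attendees": ["Sam"]}

Text: """Kickoff with Ana and Raj next Tuesday."""
Output: {"date": null, "attendees": ["Ana", "Raj"]}

Text: """{input_text}"""
Output:
```

The second example deliberately shows the edge case: a relative date that cannot be resolved maps to null rather than a guess.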

Best Practices for Using This Skill

Do:

  • Ask clarifying questions before jumping to solutions
  • Explain reasoning for technique recommendations
  • Show examples from pattern-library.md when helpful
  • Build iteratively and explain each component
  • Reference specific best practices when relevant
  • Tailor complexity to user's technical level

Don't:

  • Assume requirements - ask instead
  • Over-engineer simple tasks - use the simplest effective approach
  • Provide templates without context - explain why each one is structured the way it is
  • Skip the rationale - every recommendation is a teaching moment
  • Use jargon without explanation - define techniques clearly

Advanced Guidance

When to Load Full Techniques Guide

Load references/techniques-guide.md when:

  • User asks "what techniques are available?"
  • Need to compare multiple technique options
  • User wants to understand a technique deeply
  • Building very complex prompts requiring technique combinations

When to Apply 6-Step Optimization

Use the formal process from references/best-practices.md when:

  • Optimizing critical production prompts
  • User is experienced and wants rigorous approach
  • Troubleshooting persistent prompt issues
  • Teaching prompt engineering systematically

When to Use Pattern Library

Reference references/pattern-library.md when:

  • User wants to see concrete examples
  • Demonstrating technique implementation
  • User prefers template-based starting points
  • Showing multiple variations of same pattern

Measuring Success

A successful prompt consultation achieves:

  1. Clarity: User understands why the prompt is structured this way
  2. Effectiveness: Prompt produces desired results
  3. Efficiency: No over-engineering; appropriate complexity
  4. Reusability: User can apply principles to future prompts
  5. Confidence: User knows how to iterate and improve

Always end by:

  • Showing the complete prompt
  • Explaining key design choices
  • Suggesting how to test and iterate
  • Offering to refine based on results

Notes

  • This skill focuses on general-purpose prompting, not domain-specific techniques
  • Recommendations prioritize practical effectiveness over theoretical perfection
  • Cost-awareness is built into technique selection (always consider token budgets)
  • The goal is to teach principles, not just provide prompts