---
name: skill-builder
description: Expert guidance for creating, writing, building, and refining Claude Code Skills. Use when working with SKILL.md files, authoring new skills, improving existing skills, or understanding skill structure, progressive disclosure, workflows, validation patterns, and XML formatting.
allowed-tools: Read, Write, Glob, Bash, AskUserQuestion
---
Skills are organized prompts that get loaded on-demand. All prompting best practices apply, with an emphasis on pure XML structure for consistent parsing and efficient token usage.
See references/skill-structure.md for complete details.
```python
import pdfplumber

with pdfplumber.open("file.pdf") as pdf:
    text = pdf.pages[0].extract_text()
```
Required tags:
- `<objective>` - What the skill does and why it matters
- `<quick_start>` - Immediate, actionable guidance
- `<success_criteria>` or `<when_successful>` - How to know it worked

Conditional tags:
- `<context>` - Background/situational information
- `<workflow>` or `<process>` - Step-by-step procedures
- `<advanced_features>` - Deep-dive topics (progressive disclosure)
- `<validation>` - How to verify outputs
- `<examples>` - Multi-shot learning
- `<anti_patterns>` - Common mistakes to avoid
- `<security_checklist>` - Non-negotiable security patterns
- `<testing>` - Testing workflows
- `<common_patterns>` - Code examples and recipes
- `<reference_guides>` or `<detailed_references>` - Links to reference files
Example: Text extraction, file format conversion, simple API calls
Example: Document processing with multiple steps, API integration with configuration
Example: Payment processing, authentication systems, multi-step workflows with validation
IF no context provided (user just invoked the skill without describing what to build): → IMMEDIATELY use AskUserQuestion with these exact options:
- Create a new skill - Build a skill from scratch
- Update an existing skill - Modify or improve an existing skill
- Get guidance on skill design - Help me think through what kind of skill I need
DO NOT ask "what would you like to build?" in plain text. USE the AskUserQuestion tool.
Routing after selection:
- "Create new" → proceed to adaptive intake below
- "Update existing" → enumerate existing skills as numbered list (see below), then gather requirements for changes
- "Guidance" → help user clarify needs before building
List all skills in chat as numbered list (DO NOT use AskUserQuestion - there may be many skills):
- Glob for `~/skills/*/SKILL.md`
- Present as numbered list in chat:

```
Available skills:
1. create-agent-skills
2. generate-natal-chart
3. manage-stripe
...
```

- Ask: "Which skill would you like to update? (enter number)"
After user enters number, read that skill's SKILL.md
Ask what they want to change/improve using AskUserQuestion or direct question
Proceed with modifications
IF context was provided (user said "build a skill for X"): → Skip this gate. Proceed directly to adaptive intake.
Do NOT ask about things that are obvious from context.
Question generation guidance:
- Scope questions: "What specific operations?" not "What should it do?"
- Complexity questions: "Should this handle [specific edge case]?" based on domain knowledge
- Output questions: "What should the user see/get when successful?"
- Boundary questions: "Should this also handle [related thing] or stay focused?"
Avoid:
- Questions answerable from the initial description
- Generic questions that apply to all skills
- Yes/no questions when options would be more helpful
- Obvious questions like "what should it be called?" when the name is clear
Each question option should include a description explaining the implications of that choice.
Question: "Ready to proceed with building, or would you like me to ask more questions?"
Options:
- Proceed to building - I have enough context to build the skill
- Ask more questions - There are more details to clarify
- Let me add details - I want to provide additional context
- If "Ask more questions" selected → loop back to generate_questions with refined focus
- If "Let me add details" → receive additional context, then re-evaluate
- If "Proceed" → continue to research_trigger, then step_1
External APIs or web services:
- Keywords: "API", "endpoint", "REST", "GraphQL", "webhook", "HTTP"
- Service names: "Stripe", "AWS", "Firebase", "OpenAI", "Anthropic", etc.
Standard vocabularies and ontologies:
- Keywords: "schema.org", "vocabulary", "ontology", "JSON-LD", "RDF", "microdata"
- Examples: Schema.org types, Dublin Core, FOAF
Protocol specifications:
- Keywords: "specification", "standard", "protocol"
- Examples: "HTTP", "WebSocket", "MQTT", "OAuth", "SAML"
Third-party libraries or frameworks:
- Keywords: "library", "package", "framework", "npm", "pip", "cargo"
- Examples: "React", "Django", "pandas", "TensorFlow"
Data format standards:
- Keywords: "format", "parser", "serialization"
- Examples: "CSV", "Parquet", "Protocol Buffers", "Avro", "XML"
Detection method: Check user's initial description and all collected requirements for these keywords or patterns.
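The detection step above can be sketched as a simple keyword scan. This is a hypothetical illustration: the category names and keyword lists below are condensed from the trigger lists above and would need extending in practice.

```python
import re

# Hypothetical keyword map condensed from the trigger categories above.
RESEARCH_TRIGGERS = {
    "external_api": ["api", "endpoint", "rest", "graphql", "webhook", "http",
                     "stripe", "aws", "firebase", "openai", "anthropic"],
    "vocabulary": ["schema.org", "vocabulary", "ontology", "json-ld", "rdf", "microdata"],
    "protocol": ["specification", "standard", "protocol", "websocket", "mqtt", "oauth", "saml"],
    "library": ["library", "package", "framework", "npm", "pip", "cargo"],
    "data_format": ["format", "parser", "serialization", "csv", "parquet", "avro"],
}

def detect_research_topics(description: str) -> list[str]:
    """Return the trigger categories whose keywords appear in the description."""
    text = description.lower()
    hits = []
    for category, keywords in RESEARCH_TRIGGERS.items():
        # Word boundaries avoid false hits like "api" inside "rapid".
        if any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords):
            hits.append(category)
    return hits
```

A match in any category is enough to offer the research question below; an empty result skips the gate.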
"This skill involves [detected technology/standard]. Would you like me to research current [technology] documentation and patterns before building?"
Options:
- Yes, research first - Fetch 2024-2025 documentation for accurate, up-to-date implementation
- No, proceed with general patterns - Use knowledge cutoff data (January 2025)
- I'll provide the documentation - User will supply relevant documentation or links
If option 1 selected:
- For web APIs: Use WebSearch for "[technology] API documentation 2024 2025"
- For libraries: Use Context7 MCP if available, otherwise WebSearch
- For standards/vocabularies: Use WebSearch for "[standard] specification latest"
- Focus on: current versions, recent changes, migration guides, common patterns
- Summarize findings in internal notes for use in skill generation
If option 3 selected:
- Wait for user to provide documentation
- Read provided links or files
- Summarize key information for skill generation
Note findings in skill generation process:
Research findings for [technology]:
- Current version: [version]
- Key changes from knowledge cutoff: [summary]
- Recommended patterns: [list]
- Documentation links: [urls]
Primary indicators (strong project context):
- `CLAUDE.md` file exists
- `.claude/` directory exists
- Git repository root (`.git/` directory)
Secondary indicators (language-specific projects):
- `package.json` (Node.js)
- `pyproject.toml` or `setup.py` (Python)
- `Cargo.toml` (Rust)
- `pom.xml` or `build.gradle` (Java)
- `go.mod` (Go)
Path determination:
```bash
# Use Bash to check project context:
if [ -f "CLAUDE.md" ] || [ -d ".claude" ]; then
  PROJECT_CONTEXT=true
  PROJECT_ROOT=$(pwd)
elif git rev-parse --git-dir > /dev/null 2>&1; then
  PROJECT_CONTEXT=true
  PROJECT_ROOT=$(git rev-parse --show-toplevel)
else
  PROJECT_CONTEXT=false
fi
```

Set paths based on context:
- If `PROJECT_CONTEXT=true`: `SKILLS_PATH="$PROJECT_ROOT/skills/"` and `COMMANDS_PATH="$PROJECT_ROOT/commands/"`
- If `PROJECT_CONTEXT=false`: `SKILLS_PATH="$HOME/skills/"` and `COMMANDS_PATH="$HOME/commands/"`
"Detected project context at: [project_root]
Where should this skill be created?"
Options:
- Project-specific (skills/) - Tracked in git, shared with team, specific to this codebase
- Global (~/skills/) - Personal use across all projects, general-purpose skill
Provide recommendation based on skill purpose:
- If skill mentions project-specific terms (file paths, module names, codebase concepts) → Recommend project-specific
- If skill is general-purpose (works with any codebase) → Recommend global
- If user explicitly mentioned "for this project" → Recommend project-specific
- Default recommendation: Project-specific (safer, can be moved to global later)
- Step 0.5: Create directory at `$SKILLS_PATH/[skill-name]/`
- Steps 3-5: Write SKILL.md to `$SKILLS_PATH/[skill-name]/SKILL.md`
- Step 5: Create references at `$SKILLS_PATH/[skill-name]/references/`
- Step 8: Create slash command at `$COMMANDS_PATH/[skill-name].md`
- Step 8.5 (if applicable): Create README at `$SKILLS_PATH/[skill-name]/README.md`
All subsequent file operations MUST use these paths, not hardcoded paths.
1. File Structure Checks:
```bash
# Check SKILL.md exists
[ -f "$SKILLS_PATH/$SKILL_NAME/SKILL.md" ] && echo "✅ SKILL.md exists" || echo "❌ SKILL.md missing"

# Check directory name matches YAML name
YAML_NAME=$(head -20 "$SKILLS_PATH/$SKILL_NAME/SKILL.md" | grep "^name:" | cut -d: -f2 | tr -d ' ')
[ "$YAML_NAME" = "$SKILL_NAME" ] && echo "✅ Name matches directory" || echo "⚠️ Name mismatch"
```
2. YAML Frontmatter Validation:
```bash
# Extract frontmatter (between the opening and closing --- markers) and validate.
# Piping the raw head of the file would include both --- delimiters, which
# yaml.safe_load rejects as a multi-document stream, so strip them first.
sed -n '2,/^---$/p' "$SKILLS_PATH/$SKILL_NAME/SKILL.md" | sed '$d' | python3 -c "
import sys
import yaml
try:
    doc = yaml.safe_load(sys.stdin) or {}
    print('✅ YAML syntax valid')
    # Check required fields
    if 'name' in doc:
        print(f'✅ Name field present: {doc[\"name\"]}')
        if len(doc['name']) <= 64:
            print(f'✅ Name length OK: {len(doc[\"name\"])} chars')
        else:
            print(f'⚠️ Name too long: {len(doc[\"name\"])} chars (max 64)')
    else:
        print('❌ Name field missing')
    if 'description' in doc:
        print(f'✅ Description field present: {len(doc[\"description\"])} chars')
        if len(doc['description']) <= 1024:
            print('✅ Description length OK')
        else:
            print(f'⚠️ Description too long: {len(doc[\"description\"])} chars (max 1024)')
    else:
        print('❌ Description field missing')
except yaml.YAMLError as e:
    print(f'❌ YAML parse error: {e}')
"
```
3. Line Count Check:
```bash
LINE_COUNT=$(wc -l < "$SKILLS_PATH/$SKILL_NAME/SKILL.md")
echo "SKILL.md line count: $LINE_COUNT"
if [ "$LINE_COUNT" -lt 500 ]; then
  echo "✅ Line count under 500 limit ($LINE_COUNT lines, $((500 - LINE_COUNT)) remaining)"
else
  echo "⚠️ Line count exceeds 500 limit ($LINE_COUNT lines, $((LINE_COUNT - 500)) over)"
fi
```
4. XML Structure Checks:
```bash
# Check for markdown headings.
# grep -c prints 0 itself when nothing matches (but exits 1), so use
# '|| true' rather than '|| echo 0' to avoid capturing a second "0".
HEADING_COUNT=$(grep -c '^#' "$SKILLS_PATH/$SKILL_NAME/SKILL.md" || true)
if [ "$HEADING_COUNT" -eq 0 ]; then
  echo "✅ No markdown headings found"
else
  echo "⚠️ Found $HEADING_COUNT markdown headings - should use XML tags instead"
  grep -n '^#' "$SKILLS_PATH/$SKILL_NAME/SKILL.md"
fi

# Check required tags
for TAG in objective quick_start success_criteria; do
  if grep -q "<$TAG>" "$SKILLS_PATH/$SKILL_NAME/SKILL.md"; then
    echo "✅ Required tag <$TAG> present"
  else
    if [ "$TAG" = "success_criteria" ]; then
      if grep -q "<when_successful>" "$SKILLS_PATH/$SKILL_NAME/SKILL.md"; then
        echo "✅ Alternative tag <when_successful> present"
      else
        echo "❌ Missing required tag: <$TAG> or <when_successful>"
      fi
    else
      echo "❌ Missing required tag: <$TAG>"
    fi
  fi
done

# List all XML tags found
echo "XML tags found:"
grep -oE '<[a-z_]+>' "$SKILLS_PATH/$SKILL_NAME/SKILL.md" | sort | uniq -c | sort -rn
```
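The report template below also grades XML structure ("tags properly nested"), which the greps above do not verify. A nesting check can be sketched with a stack; this is a minimal illustration, and it will also pick up tag-like text inside code fences, so treat its output as a warning rather than a hard failure.

```python
import re

def check_tag_nesting(text: str) -> list[str]:
    """Return a list of nesting errors; an empty list means tags are balanced."""
    stack, errors = [], []
    # Match simple open/close tags like <objective> and </objective>.
    for match in re.finditer(r"</?([a-z_]+)>", text):
        name = match.group(1)
        if match.group(0).startswith("</"):
            if not stack or stack[-1] != name:
                errors.append(f"unexpected </{name}>")
            else:
                stack.pop()
        else:
            stack.append(name)
    errors.extend(f"unclosed <{name}>" for name in stack)
    return errors
```

Run it against the SKILL.md body; any reported error maps to the "XML Structure" row of the report.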
5. Progressive Disclosure Check:
```bash
if [ "$LINE_COUNT" -gt 300 ]; then
  if [ -d "$SKILLS_PATH/$SKILL_NAME/references" ]; then
    REF_COUNT=$(find "$SKILLS_PATH/$SKILL_NAME/references" -name "*.md" | wc -l)
    echo "✅ Progressive disclosure: $REF_COUNT reference files created"
  else
    echo "⚠️ SKILL.md is $LINE_COUNT lines but no reference files found - consider splitting"
  fi
fi
```
6. Reference Link Validation:
```bash
# Check all reference links exist
grep -oE '\[[^]]*\]\([^)]+\.md\)' "$SKILLS_PATH/$SKILL_NAME/SKILL.md" | while read -r link; do
  FILE=$(echo "$link" | sed 's/.*(\(.*\))/\1/')
  if [ -f "$SKILLS_PATH/$SKILL_NAME/$FILE" ]; then
    echo "✅ Reference link valid: $FILE"
  else
    echo "⚠️ Broken reference link: $FILE"
  fi
done
```
7. Naming Convention Check:
```bash
# Check verb-noun pattern
SKILL_FIRST_WORD=$(echo "$SKILL_NAME" | cut -d- -f1)
COMMON_VERBS="create manage setup generate analyze process coordinate build handle configure deploy execute extract transform validate parse render compile"
if echo "$COMMON_VERBS" | grep -qw "$SKILL_FIRST_WORD"; then
  echo "✅ Naming convention: '$SKILL_FIRST_WORD' follows verb-noun pattern"
else
  echo "⚠️ Naming: '$SKILL_FIRST_WORD' is not a common verb - verify it's action-oriented"
fi
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Skill Validation Report: [skill-name]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
File Structure:    [✅/❌] [details]
YAML Frontmatter:  [✅/❌] [details]
Line Count:        [✅/⚠️] [X lines / 500 limit]
Required Tags:     [✅/❌] [3/3 present]
No MD Headings:    [✅/⚠️] [X found]
XML Structure:     [✅/❌] [tags properly nested]
Progressive Disc:  [✅/⚠️] [X ref files]
Reference Links:   [✅/⚠️] [all valid / X broken]
Naming Convention: [✅/⚠️] [verb-noun pattern]
Slash Command:     [✅/❌] [exists at correct path]

Overall Status: [PASS ✅ / NEEDS FIXES ⚠️ / FAIL ❌]

[If issues found, list them with suggested fixes]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
- Stop immediately - Do not proceed to Step 7
- Report failures clearly with specific line numbers or details
- Attempt automatic fixes where possible:
- Remove markdown headings → convert to XML tags
- Fix YAML syntax errors if obvious
- Rename files to match conventions
- If cannot auto-fix, ask user:
  Validation failed. How should I proceed?
  1. Let me try to fix automatically
  2. Show me the issues and I'll fix manually
  3. Proceed anyway (not recommended)
- Re-run validation after fixes applied
- Only proceed to Step 7 when all critical checks pass
Warning-level issues (⚠️) can proceed but should be noted:
- Line count 400-500: "Approaching limit, future edits may need reference split"
- Uncommon verb in name: "Ensure name is clear and action-oriented"
- No reference files when >300 lines: "Consider splitting for better progressive disclosure"
"Run full validation checks or quick validation?"
Quick validation only checks:
- YAML frontmatter valid
- Required tags present
- No markdown headings
- Line count < 500
Full validation runs all 7 check categories above.
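The four quick checks can be combined into one small script. A minimal sketch: it only looks for the opening `---` delimiter rather than fully parsing the YAML, and like the grep check above it will also flag `#` lines inside fenced code blocks.

```python
import pathlib
import re

def quick_validate(path: str) -> list[str]:
    """Quick checks: frontmatter present, required tags, no headings, line count."""
    text = pathlib.Path(path).read_text()
    lines = text.splitlines()
    issues = []
    if not lines or lines[0].strip() != "---":
        issues.append("missing YAML frontmatter")
    if len(lines) >= 500:
        issues.append(f"line count {len(lines)} exceeds 500")
    for tag in ("objective", "quick_start"):
        if f"<{tag}>" not in text:
            issues.append(f"missing required tag <{tag}>")
    if "<success_criteria>" not in text and "<when_successful>" not in text:
        issues.append("missing <success_criteria> or <when_successful>")
    # Naive heading check; also matches '#' comment lines inside code fences.
    if re.search(r"^#", text, flags=re.MULTILINE):
        issues.append("markdown headings present")
    return issues
```

An empty result means the quick pass succeeded; anything else should route back to the failure-handling steps above.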
"The [skill-name] skill has been created and validated. How would you like to proceed?"
Options:
- Test it now - I'll provide a sample scenario to test the skill guidance
- Skip testing - Test it later during actual usage
- You provide test scenario - I'll use your scenario to test the skill
Recommendation: Option 1 (testing now catches issues before delivery)
For different skill types:
API integration skills (e.g., manage-stripe):
- Scenario: You need to create a new subscription for a customer. Let's invoke the skill and see what guidance it provides.

Code processing skills (e.g., coordinate-subagents):
- Scenario: You need to find all files handling user authentication. Let's use the skill to craft an efficient subagent prompt.

Data transformation skills (e.g., process-csv):
- Scenario: You have a CSV file with 10,000 rows and need to filter and transform specific columns. Let's test the skill's guidance.

Setup/configuration skills (e.g., setup-testing):
- Scenario: You need to add testing to a new TypeScript project. Let's see what the skill recommends.
Generate scenario that exercises the skill's primary workflow.
Invoke the skill using the Skill tool:

```
Skill: [skill-name]
Context: [generated scenario]
```

Observe the response:
- Is the guidance clear and actionable?
- Are the steps in logical order?
- Are examples helpful and realistic?
- Is any critical information missing?
- Are there any confusing or ambiguous instructions?
Document observations:
Test Observations:
- ✅ Clear: [what worked well]
- ⚠️ Unclear: [what was confusing]
- ❌ Missing: [what information was needed but not provided]
- 💡 Suggestions: [potential improvements]
"Based on the test, the skill guidance seems [assessment]. Would you like to iterate?"
Options:
- Yes, make improvements - I'll update based on testing observations
- No, it's good enough - Proceed to Step 8 (slash command creation)
- Show me the issues first - Let me review before deciding
If option 1 selected:
- List specific improvements to make
- Make edits to SKILL.md or reference files
- Re-run validation (Step 6)
- Offer to test again with same or different scenario
If option 2 or 3 selected:
- Proceed to Step 8
Wait for user scenario: "Please describe the scenario you'd like to test with."
Acknowledge scenario: "I'll test the skill with: [user scenario]"
Invoke skill with user's scenario
Follow observation and iteration process as above
Note in output: "Testing skipped - recommend testing during first actual use"
Provide testing reminder:
When you first use this skill, observe:
- Is the guidance immediately actionable?
- Are there any unclear instructions?
- Is any critical information missing?

If issues found, you can improve the skill by editing: `$SKILLS_PATH/[skill-name]/SKILL.md`

Proceed to Step 8
Testing adds 2-5 minutes but can save hours of confusion later.
Missing context: Skill assumes knowledge user doesn't have
- Fix: Add `<context>` section with background

Steps out of order: Workflow jumps around
- Fix: Reorder steps to follow actual execution sequence

Examples too abstract: Code examples don't match real use cases
- Fix: Use more realistic, specific examples

Missing error handling: Doesn't address what to do when things fail
- Fix: Add `<troubleshooting>` or error handling guidance

Terminology mismatch: Skill uses different terms than user expects
- Fix: Add `<terminology>` section or adjust language

Too verbose or too terse: Wrong level of detail
- Fix: Adjust based on complexity (intelligence rules)
Location: $COMMANDS_PATH/{skill-name}.md (determined in Step 0.5)
Template:
```markdown
---
description: {Brief description of what the skill does}
argument-hint: [{argument description}]
allowed-tools: Skill({skill-name})
---

<objective>
Delegate {task} to the {skill-name} skill for: $ARGUMENTS

This routes to specialized skill containing patterns, best practices, and workflows.
</objective>

<process>
1. Use Skill tool to invoke {skill-name} skill
2. Pass user's request: $ARGUMENTS
3. Let skill handle workflow
</process>

<success_criteria>
- Skill successfully invoked
- Arguments passed correctly to skill
</success_criteria>
```
The slash command's only job is routing: all expertise lives in the skill.
Core principles: references/core-principles.md
- XML structure (consistency, parseability, Claude performance)
- Conciseness (context window is shared)
- Degrees of freedom (matching specificity to task fragility)
- Model testing (Haiku vs Sonnet vs Opus)
Skill structure: references/skill-structure.md
- XML structure requirements
- Naming conventions
- Writing effective descriptions
- Progressive disclosure patterns
- File organization
Workflows and validation: references/workflows-and-validation.md
- Complex workflows with checklists
- Feedback loops (validate → fix → repeat)
- Plan-validate-execute pattern
Common patterns: references/common-patterns.md
- Template patterns
- Examples patterns
- Consistent terminology
- Anti-patterns to avoid
Executable code: references/executable-code.md
- When to use utility scripts
- Error handling in scripts
- Package dependencies
- MCP tool references
API security: references/api-security.md
- Preventing credentials from appearing in chat
- Using the secure API wrapper
- Adding new services and operations
- Credential storage patterns
Iteration and testing: references/iteration-and-testing.md
- Evaluation-driven development
- Claude A/B development pattern
- Observing how Claude navigates Skills
- XML structure validation during testing
Prompting fundamentals:
- Valid YAML frontmatter with descriptive name and comprehensive description
- Pure XML structure with no markdown headings in body
- Required tags: objective, quick_start, success_criteria
- Conditional tags appropriate to complexity level
- Progressive disclosure (SKILL.md < 500 lines, details in reference files)
- Clear, concise instructions that assume Claude is smart
- Real-world testing and iteration based on observed behavior
- Lightweight slash command wrapper for discoverability