| Field | Value |
| --- | --- |
| name | skill-name |
| description | One-line description of what this skill does |
| triggers | trigger phrase 1, trigger phrase 2, trigger phrase 3 |
# [Skill Name]
[2-3 sentence description of what this skill accomplishes and when to use it.]
## Prerequisites
Before running this skill:
- Domain expertise engine installed (`pip install -e lib/domain-expertise`)
- [Other prerequisites]
## Input Required
- [Input 1]: [Description]
- [Input 2]: [Description] (optional)
## Workflow
### Step 1: Gather Information
[How to collect input from user or context]
### Step 2: Load Domain Expertise
```bash
expertise query [domain] "[relevant query]"
```
This returns:
- Principles relevant to the task
- Rubric criteria for evaluation
- Contrast examples (WEAK vs STRONG)
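If the skill is scripted rather than run interactively, one way to load expertise is to shell out to the CLI and capture its output. This is only a sketch: the domain name, the query text, and the assumption that `expertise query` writes its findings to stdout are illustrative, not guaranteed by `lib/domain-expertise`.

```python
# Illustrative sketch only: "copywriting" and the query string are hypothetical,
# and this assumes `expertise query` prints its results to stdout.
import subprocess

result = subprocess.run(
    ["expertise", "query", "copywriting", "criteria for a strong landing-page headline"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # principles, rubric criteria, WEAK vs STRONG contrast examples
```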
### Step 3: Execute Task
[Main logic - how to apply expertise to the input]
For evaluation tasks:
- Compare input against rubric criteria
- Score based on scoring guide
- Identify specific issues (quote actual content)
- Generate recommendations
Scoring Guide (one way to apply it is sketched after this list):
- 9-10: Excellent - [criteria]
- 7-8: Good - [criteria]
- 5-6: Adequate - [criteria]
- 3-4: Poor - [criteria]
- 1-2: Failing - [criteria]
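As an illustration, the sketch below turns rubric checks into a band from the scoring guide. The rubric-result shape (a list of criterion/pass pairs) and the pass-rate thresholds are assumptions made for this example, not part of the expertise engine.

```python
# Sketch only: the (criterion, passed) result shape and the pass-rate
# thresholds are illustrative assumptions, not part of the engine.
from typing import List, Tuple

def score_from_rubric(results: List[Tuple[str, bool]]) -> int:
    """Map rubric check results onto the 1-10 scoring guide."""
    if not results:
        raise ValueError("no rubric criteria were evaluated")
    pass_rate = sum(passed for _, passed in results) / len(results)
    if pass_rate >= 0.9:
        return 9  # Excellent
    if pass_rate >= 0.75:
        return 7  # Good
    if pass_rate >= 0.5:
        return 5  # Adequate
    if pass_rate >= 0.25:
        return 3  # Poor
    return 1      # Failing

# Example: 3 of 4 criteria passed -> falls in the "Good" band (7)
print(score_from_rubric([("clear claim", True), ("specific audience", True),
                         ("evidence cited", False), ("call to action", True)]))
```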
### Step 4: Output Results
Return results in this format:
```json
{
  "input": "[what was analyzed]",
  "score": 0.0,
  "summary": "[1-2 sentence summary]",
  "strengths": [
    "[Specific strength]"
  ],
  "issues": [
    "[Specific issue with quote from input]"
  ],
  "recommendations": [
    "[OPTIMIZE] [Recommendation for existing element]",
    "[ADD NEW] [Recommendation requiring new content]"
  ]
}
```
## Anti-Fabrication Checklist
Before returning results, verify (a sketch that automates some of these checks follows the list):
- Every [OPTIMIZE] recommendation references something that exists
- Every [ADD NEW] recommendation is clearly flagged as new work
- Issues quote actual content, not paraphrased versions
- Recommendations don't reference features that don't exist
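Some of these checks can be automated before the result is returned. The sketch below assumes the result dict follows the Step 4 format and that the analyzed content is available as a plain string; both are assumptions about how the skill is wired, not requirements of the template.

```python
import re

# Sketch only: assumes `result` matches the Step 4 structure and `source_text`
# is the raw content that was analyzed.
def check_result(result: dict, source_text: str) -> list:
    """Return anti-fabrication problems found in a skill result."""
    problems = []

    # Every recommendation must be explicitly flagged as OPTIMIZE or ADD NEW.
    for rec in result.get("recommendations", []):
        if not rec.startswith(("[OPTIMIZE]", "[ADD NEW]")):
            problems.append(f"Unflagged recommendation: {rec!r}")

    # Quoted spans inside issues should appear verbatim in the analyzed input.
    for issue in result.get("issues", []):
        for quote in re.findall(r'"([^"]+)"', issue):
            if quote not in source_text:
                problems.append(f"Issue quotes text not found in input: {quote!r}")

    return problems
```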
## Example Usage
User: [example trigger phrase]
Claude: [brief example of skill execution and output]