Claude Code Plugins

Community-maintained marketplace

Feedback

writing-skills

@drewpayment/orbit
1
0

Use when creating new skills, editing existing skills, or verifying skills work before deployment

Install Skill

1Download skill
2Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify skill by going through its instructions before using it.

SKILL.md

name writing-skills
description Use when creating new skills, editing existing skills, or verifying skills work before deployment

Writing Skills

Overview

Writing skills IS Test-Driven Development applied to process documentation.

Personal skills live in agent-specific directories (~/.claude/skills for Claude Code, ~/.codex/skills for Codex)

You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).

Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.

REQUIRED BACKGROUND: You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.

What is a Skill?

A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.

Skills are: Reusable techniques, patterns, tools, reference guides

Skills are NOT: Narratives about how you solved a problem once

TDD Mapping for Skills

TDD Concept Skill Creation
Test case Pressure scenario with subagent
Production code Skill document (SKILL.md)
Test fails (RED) Agent violates rule without skill (baseline)
Test passes (GREEN) Agent complies with skill present
Refactor Close loopholes while maintaining compliance
Write test first Run baseline scenario BEFORE writing skill
Watch it fail Document exact rationalizations agent uses
Minimal code Write skill addressing those specific violations
Watch it pass Verify agent now complies
Refactor cycle Find new rationalizations → plug → re-verify

The entire skill creation process follows RED-GREEN-REFACTOR.

When to Create a Skill

Create when:

  • Technique wasn't intuitively obvious to you
  • You'd reference this again across projects
  • Pattern applies broadly (not project-specific)
  • Others would benefit

Don't create for:

  • One-off solutions
  • Standard practices well-documented elsewhere
  • Project-specific conventions (put in CLAUDE.md)
  • Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)

Skill Types

Technique

Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)

Pattern

Way of thinking about problems (flatten-with-flags, test-invariants)

Reference

API docs, syntax guides, tool documentation (office docs)

Directory Structure

skills/
  skill-name/
    SKILL.md              # Main reference (required)
    supporting-file.*     # Only if needed

Flat namespace - all skills in one searchable namespace

Separate files for:

  1. Heavy reference (100+ lines) - API docs, comprehensive syntax
  2. Reusable tools - Scripts, utilities, templates

Keep inline:

  • Principles and concepts
  • Code patterns (< 50 lines)
  • Everything else

SKILL.md Structure

Frontmatter (YAML):

  • Only two fields supported: name and description
  • Max 1024 characters total
  • name: Use letters, numbers, and hyphens only (no parentheses, special chars)
  • description: Third-person, describes ONLY when to use (NOT what it does)
    • Start with "Use when..." to focus on triggering conditions
    • Include specific symptoms, situations, and contexts
    • NEVER summarize the skill's process or workflow (see CSO section for why)
    • Keep under 500 characters if possible
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
---

# Skill Name

## Overview
What is this? Core principle in 1-2 sentences.

## When to Use
[Small inline flowchart IF decision non-obvious]

Bullet list with SYMPTOMS and use cases
When NOT to use

## Core Pattern (for techniques/patterns)
Before/after code comparison

## Quick Reference
Table or bullets for scanning common operations

## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools

## Common Mistakes
What goes wrong + fixes

## Real-World Impact (optional)
Concrete results

Claude Search Optimization (CSO)

Critical for discovery: Future Claude needs to FIND your skill

1. Rich Description Field

Purpose: Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"

Format: Start with "Use when..." to focus on triggering conditions

CRITICAL: Description = When to Use, NOT What the Skill Does

The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.

2. Keyword Coverage

Use words Claude would search for:

  • Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
  • Symptoms: "flaky", "hanging", "zombie", "pollution"
  • Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
  • Tools: Actual commands, library names, file types

3. Descriptive Naming

Use active voice, verb-first:

  • creating-skills not skill-creation
  • condition-based-waiting not async-test-helpers

The Iron Law (Same as TDD)

NO SKILL WITHOUT A FAILING TEST FIRST

This applies to NEW skills AND EDITS to existing skills.

Write skill before testing? Delete it. Start over. Edit skill without testing? Same violation.

No exceptions:

  • Not for "simple additions"
  • Not for "just adding a section"
  • Not for "documentation updates"
  • Don't keep untested changes as "reference"
  • Don't "adapt" while running tests
  • Delete means delete

Skill Creation Checklist (TDD Adapted)

RED Phase - Write Failing Test:

  • Create pressure scenarios (3+ combined pressures for discipline skills)
  • Run scenarios WITHOUT skill - document baseline behavior verbatim
  • Identify patterns in rationalizations/failures

GREEN Phase - Write Minimal Skill:

  • Name uses only letters, numbers, hyphens (no parentheses/special chars)
  • YAML frontmatter with only name and description (max 1024 chars)
  • Description starts with "Use when..." and includes specific triggers/symptoms
  • Description written in third person
  • Keywords throughout for search (errors, symptoms, tools)
  • Clear overview with core principle
  • Address specific baseline failures identified in RED
  • Code inline OR link to separate file
  • One excellent example (not multi-language)
  • Run scenarios WITH skill - verify agents now comply

REFACTOR Phase - Close Loopholes:

  • Identify NEW rationalizations from testing
  • Add explicit counters (if discipline skill)
  • Build rationalization table from all test iterations
  • Create red flags list
  • Re-test until bulletproof

Quality Checks:

  • Small flowchart only if decision non-obvious
  • Quick reference table
  • Common mistakes section
  • No narrative storytelling
  • Supporting files only for tools or heavy reference

Deployment:

  • Commit skill to git and push to your fork (if configured)
  • Consider contributing back via PR (if broadly useful)

The Bottom Line

Creating skills IS TDD for process documentation.

Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.

If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.