---
name: audit-coordinator
description: Orchestrates comprehensive audits across multiple specialized auditors for Claude Code customizations and provides guidance on naming, organization, best practices, and troubleshooting. Use when: (1) Auditing - wants complete evaluation, multi-faceted analysis, coordinated audit reports, thorough validation, or asks to audit multiple components; (2) Guidance - asks "what should I name...", "how should I organize...", "best practices for...", troubleshooting issues, understanding evaluation criteria, or needs pre-deployment validation. Automatically determines which auditors to invoke (agent-audit, skill-audit, hook-audit, command-audit, output-style-audit, evaluator, test-runner) based on target type and compiles unified reports with consolidated recommendations.
allowed-tools: [Read, Glob, Grep, Bash, Skill, Task]
model: sonnet
---
## Reference Files

### Audit Orchestration

- workflow-patterns.md - Multi-auditor invocation patterns and decision matrix
- report-compilation.md - Unified report structure and priority reconciliation

### Evaluation Standards and Troubleshooting

- evaluation-criteria.md - Comprehensive standards for each component type
- common-issues.md - Frequent problems and specific fixes, with examples
- anti-patterns.md - Common mistakes to avoid when building customizations

### Shared References (Used by All Authoring Skills)

- naming-conventions.md - Patterns for agents, commands, skills, hooks, and output-styles
- frontmatter-requirements.md - Complete YAML specification for each component type
- when-to-use-what.md - Decision guide for choosing agents vs skills vs commands vs output-styles
- file-organization.md - Directory structure and layout best practices
- hook-events.md - Hook event types and timing reference
- customization-examples.md - Real-world examples across all component types
# Audit Coordinator
Orchestrates comprehensive audits by coordinating multiple specialized auditors and compiling their findings into unified reports.
## Available Auditors
The audit ecosystem includes:
### claude-code-evaluator (Agent)

**Purpose**: General correctness, clarity, and effectiveness validation
**Scope**: All customization types
**Focus**: YAML validation, required fields, structure, naming conventions, context economy
**Invocation**: Via Task tool with subagent_type='claude-code-evaluator'

### skill-audit (Skill)

**Purpose**: Skill discoverability and triggering effectiveness
**Scope**: Skills only
**Focus**: Description quality, trigger phrase coverage, progressive disclosure, discovery score
**Invocation**: Via Skill tool, or auto-triggers on skill-related queries

### hook-audit (Skill)

**Purpose**: Hook safety, correctness, and performance
**Scope**: Hooks only
**Focus**: JSON handling, exit codes, error handling, performance, settings.json registration
**Invocation**: Via Skill tool, or auto-triggers on hook-related queries

### claude-code-test-runner (Agent)

**Purpose**: Functional testing and execution validation
**Scope**: All customization types
**Focus**: Test generation, execution, edge cases, integration testing
**Invocation**: Via Task tool with subagent_type='claude-code-test-runner'

### agent-audit (Skill)

**Purpose**: Agent-specific validation for model selection, tool restrictions, and focus areas
**Scope**: Agents only
**Focus**: Model appropriateness (Sonnet/Haiku/Opus), tool permissions, focus area quality, approach completeness
**Invocation**: Via Skill tool, or auto-triggers on agent-related queries

### command-audit (Skill)

**Purpose**: Command delegation and simplicity validation
**Scope**: Commands only
**Focus**: Delegation clarity, simplicity enforcement (6-80 lines), argument handling, documentation proportionality
**Invocation**: Via Skill tool, or auto-triggers on command-related queries

### output-style-audit (Skill)

**Purpose**: Output-style persona and behavior validation
**Scope**: Output-styles only
**Focus**: Persona definition clarity, behavior specification concreteness, keep-coding-instructions decision, scope alignment
**Invocation**: Via Skill tool, or auto-triggers on output-style-related queries
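As a rough mental model, the coordinator can be thought of as holding a registry like the one below. This is a minimal illustrative sketch, not a real Claude Code data structure; the field names are assumptions.

```python
# Hypothetical registry of the auditors above. "kind" selects the
# invocation mechanism: agents are invoked via the Task tool, skills
# via the Skill tool. Field names are illustrative assumptions.
AUDITORS = {
    "claude-code-evaluator":   {"kind": "agent", "scope": "all"},
    "claude-code-test-runner": {"kind": "agent", "scope": "all"},
    "skill-audit":             {"kind": "skill", "scope": "skills"},
    "hook-audit":              {"kind": "skill", "scope": "hooks"},
    "agent-audit":             {"kind": "skill", "scope": "agents"},
    "command-audit":           {"kind": "skill", "scope": "commands"},
    "output-style-audit":      {"kind": "skill", "scope": "output-styles"},
}
```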
## Orchestration Workflow
### Step 1: Identify Target Type

Determine what needs auditing (a path-classification sketch follows these lists):
**Single File:**

- Agent file (*.md in agents/)
- Skill (SKILL.md in skills/*/)
- Hook (.sh or .py in hooks/)
- Command (*.md in commands/)
- Output-style (*.md in output-styles/)
**Multiple Files:**
- All skills
- All hooks
- All agents
- Entire setup
**Context Clues:**
- File path mentioned
- Type specified ("audit my hook", "check this skill")
- General request ("audit my setup", "review everything")
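Where a file path is available, type identification can follow the directory and extension conventions above. A minimal sketch in Python, assuming the conventional agents/, skills/, hooks/, commands/, and output-styles/ layout:

```python
from pathlib import Path

def classify_target(path: str) -> str | None:
    """Map a file path to a customization type using the conventions
    listed above. A sketch only; real setups may nest these folders
    under .claude/ or ~/.claude/."""
    p = Path(path)
    parts = set(p.parts)
    if "skills" in parts and p.name == "SKILL.md":
        return "skill"
    if "hooks" in parts and p.suffix in {".sh", ".py"}:
        return "hook"
    if "agents" in parts and p.suffix == ".md":
        return "agent"
    if "commands" in parts and p.suffix == ".md":
        return "command"
    if "output-styles" in parts and p.suffix == ".md":
        return "output-style"
    return None  # fall back to context clues or ask the user
```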
### Step 2: Determine Appropriate Auditors

Use the decision matrix, keyed by target type (a lookup-table sketch follows these lists):
**Agent:**
- Primary: agent-audit (model, tools, focus areas, approach)
- Secondary: claude-code-evaluator (structure)
- Optional: claude-code-test-runner (if testing requested)
**Skill:**
- Primary: skill-audit (discoverability)
- Secondary: claude-code-evaluator (structure)
- Optional: claude-code-test-runner (functionality)
**Hook:**
- Primary: hook-audit (safety and correctness)
- Secondary: claude-code-evaluator (structure)
**Command:**
- Primary: command-audit (delegation, simplicity, arguments)
- Secondary: claude-code-evaluator (structure)
**Output-Style:**
- Primary: output-style-audit (persona, behaviors, coding-instructions)
- Secondary: claude-code-evaluator (structure)
- Optional: claude-code-test-runner (effectiveness)
**Setup (All):**
- agent-audit (all agents)
- skill-audit (all skills)
- hook-audit (all hooks)
- command-audit (all commands)
- output-style-audit (all output-styles)
- claude-code-evaluator (comprehensive)
- Can run in parallel
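The matrix above translates naturally into a lookup table. A minimal sketch, assuming a (primary, optional) tuple per target type; the shape is illustrative, not a fixed schema:

```python
# Decision matrix from the lists above. The optional auditor is added
# only when the user explicitly asks for testing.
DECISION_MATRIX = {
    "agent":        ("agent-audit",        "claude-code-test-runner"),
    "skill":        ("skill-audit",        "claude-code-test-runner"),
    "hook":         ("hook-audit",         None),
    "command":      ("command-audit",      None),
    "output-style": ("output-style-audit", "claude-code-test-runner"),
}

def select_auditors(target: str, want_tests: bool = False) -> list[str]:
    primary, optional = DECISION_MATRIX[target]
    auditors = [primary, "claude-code-evaluator"]  # evaluator is always the secondary
    if want_tests and optional:
        auditors.append(optional)
    return auditors
```

For example, `select_auditors("skill", want_tests=True)` yields `["skill-audit", "claude-code-evaluator", "claude-code-test-runner"]`, matching the Skill row above.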
### Step 3: Invoke Auditors

Execute auditors in the appropriate sequence (a sketch of the sequencing logic follows these examples):
**Sequential** (when results depend on each other):

skill-audit → claude-code-evaluator → test-runner

**Parallel** (when independent):

skill-audit (all skills) || hook-audit (all hooks) || evaluator (agents/commands)

**Single** (when only one auditor is needed):

hook-audit → done
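The sequencing logic is orthogonal to how each auditor is actually invoked. A sketch, with `run_one` standing in for the real Skill/Task tool call (which is not shown here):

```python
from concurrent.futures import ThreadPoolExecutor

def run_audits(auditors, run_one, parallel=False):
    """Collect one report per auditor. `run_one` is a placeholder for
    the actual Skill/Task invocation; only the sequencing is real here."""
    if parallel:
        # Independent audits (e.g. all skills + all hooks) run concurrently.
        with ThreadPoolExecutor() as pool:
            return list(pool.map(run_one, auditors))
    # Sequential: preserves order when a later auditor builds on earlier results.
    return [run_one(a) for a in auditors]
```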
### Step 4: Compile Reports
Collect findings from all auditors and create unified report.
### Step 5: Generate Unified Summary
Consolidate recommendations by priority and provide next steps.
## Target-Specific Patterns
### Pattern: Single Skill Audit

**User Query**: "Audit my bash-audit skill"

**Workflow**:

1. Invoke skill-audit for discoverability analysis
2. Invoke claude-code-evaluator for structure validation
3. Compile reports
4. Generate unified recommendations

**Output**:

- Discovery score
- Structure assessment
- Progressive disclosure status
- Consolidated recommendations
### Pattern: Single Hook Audit

**User Query**: "Check my validate-config.py hook"

**Workflow**:

1. Invoke hook-audit for safety and correctness
2. Optionally invoke the evaluator for structure
3. Compile reports
4. Generate unified recommendations

**Output**:

- Safety compliance status
- Exit code correctness
- Error handling assessment
- Performance analysis
- Consolidated recommendations
### Pattern: Setup-Wide Audit

**User Query**: "Audit my entire Claude Code setup"

**Workflow**:

1. Invoke claude-code-evaluator for comprehensive setup analysis
2. Invoke skill-audit for all skills
3. Invoke hook-audit for all hooks
4. Run in parallel where possible
5. Compile all reports
6. Generate prioritized recommendations

**Output**:

- Setup summary (counts, sizes, context usage)
- Component-specific findings
- Cross-cutting issues
- Prioritized action items
### Pattern: Multiple Component Types

**User Query**: "Audit all my skills and hooks"

**Workflow**:

1. Invoke skill-audit for all skills (can run in parallel)
2. Invoke hook-audit for all hooks (can run in parallel)
3. Compile reports
4. Generate a unified summary

**Output**:

- Skills: discovery scores, structure
- Hooks: safety compliance, performance
- Consolidated recommendations
## Report Compilation
When multiple auditors run, compile findings:
### Consolidation Strategy

1. Collect all findings from each auditor
2. Group by severity: Critical → Important → Nice-to-Have
3. Deduplicate similar issues across auditors (see the sketch after this list)
4. Reconcile priorities when auditors disagree
5. Generate unified recommendations
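A minimal consolidation sketch, assuming each finding is a dict with component, issue, and severity keys (an illustrative shape, not a fixed schema):

```python
SEVERITY = {"Critical": 0, "Important": 1, "Nice-to-Have": 2}

def consolidate(findings):
    """Deduplicate findings that several auditors reported for the same
    component, keeping the most severe copy, then sort by severity."""
    merged = {}
    for f in findings:
        key = (f["component"], f["issue"])
        if key not in merged or SEVERITY[f["severity"]] < SEVERITY[merged[key]["severity"]]:
            merged[key] = f  # keep the higher-severity duplicate
    return sorted(merged.values(), key=lambda f: SEVERITY[f["severity"]])
```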
### Priority Reconciliation

When different auditors assign different priorities to the same issue, apply these rules (sketched in code below):

- Rule 1: Critical from any auditor → Critical overall
- Rule 2: Important + Important → Critical
- Rule 3: Important + Nice-to-Have → Important
- Rule 4: Nice-to-Have + Nice-to-Have → Nice-to-Have
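The four rules reduce to a small function. A direct sketch of the rules as stated:

```python
def reconcile(priorities: list[str]) -> str:
    """Combine the priorities different auditors assigned to one issue."""
    if "Critical" in priorities:
        return "Critical"                  # Rule 1
    if priorities.count("Important") >= 2:
        return "Critical"                  # Rule 2: repeated Important escalates
    if "Important" in priorities:
        return "Important"                 # Rule 3
    return "Nice-to-Have"                  # Rule 4
```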
### Unified Report Structure

```markdown
# Comprehensive Audit Report

**Target**: {what was audited}
**Date**: {YYYY-MM-DD HH:MM}
**Auditors**: {list of auditors invoked}

## Executive Summary

{1-2 sentence overview of findings}

## Overall Status

**Health Score**: {composite score}

- {Auditor 1}: {status}
- {Auditor 2}: {status}
- {Auditor 3}: {status}

## Critical Issues

{Must-fix issues from any auditor}

## Important Issues

{Should-fix issues}

## Nice-to-Have Improvements

{Polish items}

## Detailed Findings by Component

### {Component 1}

{Findings from relevant auditors}

### {Component 2}

{Findings from relevant auditors}

## Prioritized Action Items

1. **Critical**: {consolidated must-fix items}
2. **Important**: {consolidated should-fix items}
3. **Nice-to-Have**: {consolidated polish items}

## Next Steps

{Specific, actionable next steps}
```
## Quick Usage Examples
**Audit a skill:**

User: "Audit my bash-audit skill"
Assistant: [Invokes skill-audit, evaluator; compiles report]

**Audit a hook:**

User: "Check my validate-config.py hook"
Assistant: [Invokes hook-audit; generates report]

**Audit entire setup:**

User: "Audit my complete Claude Code setup"
Assistant: [Invokes evaluator, skill-audit, hook-audit in parallel; compiles comprehensive report]

**Audit multiple skills:**

User: "Check all my skills for discoverability"
Assistant: [Invokes skill-audit for each skill; generates consolidated report]
## Integration with Other Auditors
### With skill-audit

When to use together:

- Comprehensive skill analysis
- Combining discoverability + structure validation

**Sequence**: skill-audit → evaluator
**Output**: Discovery score + structure assessment
### With hook-audit

When to use together:

- Complete hook validation
- Safety + structure analysis

**Sequence**: hook-audit → evaluator (optional)
**Output**: Safety compliance + structure validation
### With claude-code-evaluator

When to use together:

- Always, for structural validation
- Complements specialized auditors

**Sequence**: Specialized auditor first, then evaluator
**Output**: Specialized analysis + general validation
### With claude-code-test-runner

When to use together:

- Functional validation requested
- After structure/discovery validation

**Sequence**: Other auditors → test-runner
**Output**: Design validation + functional testing
## Decision Matrix
Quick reference for which auditors to invoke:
| Target | Primary Auditor | Secondary | Optional | Sequence |
|---|---|---|---|---|
| Skill | skill-audit | evaluator | test-runner | Sequential |
| Hook | hook-audit | evaluator | - | Sequential |
| Agent | agent-audit | evaluator | test-runner | Sequential |
| Command | command-audit | evaluator | - | Sequential |
| Output-Style | output-style-audit | evaluator | test-runner | Sequential |
| All Skills | skill-audit | evaluator | - | Parallel |
| All Hooks | hook-audit | evaluator | - | Parallel |
| All Agents | agent-audit | evaluator | - | Parallel |
| All Commands | command-audit | evaluator | - | Parallel |
| Setup | all specialized | evaluator | test-runner | Parallel |
## Summary

**Audit Coordinator Benefits:**

- **Automatic auditor selection** - Chooses the right auditors for the target
- **Parallel execution** - Runs independent audits concurrently
- **Unified reporting** - Compiles findings from multiple sources
- **Priority reconciliation** - Consolidates conflicting priorities
- **Comprehensive coverage** - Ensures all relevant checks are performed

**Best for**: Multi-component audits, setup-wide analysis, coordinated validation
For detailed orchestration patterns, see workflow-patterns.md. For report compilation guidance, see report-compilation.md.
## Guidance Workflows
Beyond orchestrating audits, this skill provides guidance on Claude Code customization standards and best practices.
### Pattern: Naming Guidance

**User Query**: "What should I name my new agent that reviews security?"

**Workflow**:

1. Identify the component type (agent, skill, command, hook, output-style)
2. Reference naming conventions (see shared reference: naming-conventions.md)
3. Provide specific name suggestions with rationale
4. Offer examples of similar components
5. Explain the naming pattern (e.g., {domain}-{role} for agents; see the sketch below)

**Output**: Concrete name suggestions + pattern explanation
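As an illustration of the {domain}-{role} pattern, a hedged sketch; the regex is an assumption, and naming-conventions.md remains the authoritative reference:

```python
import re

# Assumed shape of the {domain}-{role} convention: lowercase words
# joined by hyphens. See naming-conventions.md for the real rules.
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$")

def suggest_agent_name(domain: str, role: str) -> str:
    name = f"{domain.strip().lower()}-{role.strip().lower()}"
    if not NAME_RE.match(name):
        raise ValueError(f"{name!r} does not match the {{domain}}-{{role}} pattern")
    return name

# e.g. suggest_agent_name("security", "reviewer") -> "security-reviewer"
```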
### Pattern: Organization Guidance

**User Query**: "How should I organize my skill's reference files?"

**Workflow**:

1. Assess current structure
2. Reference file organization standards (see shared reference: file-organization.md)
3. Apply progressive disclosure principles
4. Recommend structure improvements
5. Provide migration guidance if restructuring is needed

**Output**: Recommended directory structure + migration steps
### Pattern: Pre-Deployment Validation

**User Query**: "Does this new skill look good?" (asked while editing SKILL.md)

**Workflow**:

1. Read the target file
2. Invoke the appropriate auditor (skill-audit, agent-audit, etc.)
3. Check against evaluation criteria (see references/evaluation-criteria.md)
4. Flag blocking issues: missing fields, invalid YAML, critical violations (see the sketch below)
5. Provide a quick validation summary

**Output**: Go/No-Go decision + critical fixes needed
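The blocking-issue check from step 4 can be approximated with a frontmatter parse. A go/no-go sketch, assuming PyYAML is available and a placeholder required-field set; frontmatter-requirements.md defines the real per-component list:

```python
import yaml  # PyYAML, assumed available

REQUIRED = {"name", "description"}  # assumption; see frontmatter-requirements.md

def blocking_issues(text: str) -> list[str]:
    """Return an empty list for "Go", otherwise the blocking problems."""
    if not text.startswith("---"):
        return ["missing YAML frontmatter block"]
    try:
        # The frontmatter sits between the first two --- delimiters.
        meta = yaml.safe_load(text.split("---", 2)[1]) or {}
    except yaml.YAMLError as exc:
        return [f"invalid YAML frontmatter: {exc}"]
    return [f"missing required field: {f}" for f in sorted(REQUIRED - set(meta))]
```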
### Pattern: Troubleshooting Guidance

**User Query**: "Why isn't my skill being discovered?"

**Workflow**:

1. Identify the symptom (not discovered, not triggering, errors, etc.)
2. Reference the common issues guide (see references/common-issues.md)
3. Reference anti-patterns (see references/anti-patterns.md)
4. Provide a specific diagnosis and fix
5. Offer to run a diagnostic audit if needed

**Output**: Diagnosis + specific fix + optional follow-up audit
### Pattern: Best Practices Consultation

**User Query**: "What are best practices for agents?"

**Workflow**:

1. Identify the component type
2. Reference evaluation criteria (see references/evaluation-criteria.md)
3. Provide component-specific best practices
4. Give concrete examples of good patterns
5. Point to anti-patterns to avoid

**Output**: Best practices summary + examples + anti-patterns