---
name: marketplace-analysis
description: Use when reviewing plugin quality, auditing plugins, analyzing the marketplace, checking plugins against Anthropic standards, or evaluating plugin architecture - provides systematic analysis methodology with validation framework
---
# Marketplace Analysis
Analyze Claude Code plugins to achieve Anthropic-level quality standards.
## Core Philosophy
**Anthropic Quality Bar:** Same or more functionality with a leaner, more efficient implementation.
Principles:
- Systems thinking over point fixes
- Elegant simplicity over feature accumulation
- Proven improvements over assumptions
- Deletion over addition
## Analysis Process
### 1. Quick Scan
- Count plugins and components
- Note obvious issues (large files, naming inconsistencies)
- Flag files >500 lines
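The quick-scan counts can be collected with a small shell function. This is a sketch, assuming one directory per plugin under a root such as `plugins/`; the function name and layout are illustrative, not part of this skill:

```shell
# Quick scan: count plugins and flag oversized files.
quick_scan() {
  root="$1"
  # One directory per plugin at the top level (layout assumption).
  echo "plugins: $(find "$root" -mindepth 1 -maxdepth 1 -type d | wc -l)"
  # Flag any markdown or script file over the 500-line threshold.
  find "$root" -type f \( -name '*.md' -o -name '*.sh' -o -name '*.py' \) |
  while IFS= read -r f; do
    n=$(wc -l < "$f")
    if [ "$n" -gt 500 ]; then
      echo "OVERSIZED ($n lines): $f"
    fi
  done
}

# Example: quick_scan plugins
```

Oversized files are only flagged here, not judged; whether a long file is a real problem is decided in the deep-analysis step.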
### 2. Deep Analysis (per plugin)
- Read SKILL.md files - check trigger phrases, writing style
- Read agent descriptions - check triggering examples
- Read commands - check argument handling
- Check hooks - validate event usage
- Map interactions - how components work together
### 3. Cross-Plugin Analysis
- Find redundancy across plugins
- Check consistency (naming, patterns, styles)
- Identify gaps and conflicts
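One cheap redundancy signal is a skill name that appears in more than one plugin. A sketch, assuming each skill lives in a directory named after it that contains a SKILL.md (the function name is hypothetical):

```shell
# List every skill directory name; uniq -d prints only duplicates,
# i.e. names that occur in more than one plugin.
dup_skill_names() {
  root="$1"
  find "$root" -name SKILL.md |
  awk -F/ '{ print $(NF-1) }' |
  sort | uniq -d
}

# Example: dup_skill_names plugins
```

A duplicate name is only a lead: the two skills may serve different purposes, so confirm actual overlap by reading both before recommending consolidation.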
### 4. Reference Validation
For each skill, verify bundled references exist:
Extract paths from SKILL.md:
- `references/*.md` mentions
- `scripts/*.sh` or `scripts/*.py` mentions
- Markdown links: `[text](relative/path)`
Validate each path:
- Resolve relative to skill directory
- Check file exists with Glob
- Flag missing as "broken reference"
Report:
- Missing references = Priority 1 errors
- Orphaned files (exist but not referenced) = Priority 3 notes
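The extract-resolve-check loop above can be sketched as a shell function: pull `references/` and `scripts/` paths out of a SKILL.md, resolve each relative to the skill directory, and flag anything missing. The grep pattern is an assumption and will miss less conventional references:

```shell
# Validate bundled references mentioned in a SKILL.md.
validate_refs() {
  skill_md="$1"
  skill_dir=$(dirname "$skill_md")
  # Extract references/... and scripts/... paths (pattern is a heuristic).
  grep -oE '(references|scripts)/[A-Za-z0-9._/-]+' "$skill_md" | sort -u |
  while IFS= read -r ref; do
    # Resolve relative to the skill directory and check existence.
    if [ -e "$skill_dir/$ref" ]; then
      echo "OK: $ref"
    else
      echo "BROKEN: $ref"   # Priority 1 error
    fi
  done
}

# Example: validate_refs plugins/my-plugin/skills/my-skill/SKILL.md
```

Orphan detection is the inverse pass: list the files that actually exist under `references/` and `scripts/` and diff against the extracted mentions.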
## Anti-Overengineering Checks
Before proposing ANY change:
- Is this simpler than the original?
- Does this solve a real problem?
- Would a new user understand this?
- Can I remove instead of add?
Red flags:
- Adding abstraction for one use case
- "Might need this later" reasoning
- Recommending deletion based on filename alone
## Output Format
```markdown
## Priority 1: High Impact, Low Effort
- [ ] [Change] - [Why] - [Expected impact] - [How to validate]

## Priority 2: Medium Impact
...

## Priority 3: Consider Later
...
```
Each recommendation must include a validation approach.
## References
For detailed guidance:
- `references/skill-design-standards.md` - Official Anthropic skill-creator guide (authoritative source for skill structure, frontmatter, progressive disclosure)
- `references/quality-standards.md` - Quality criteria checklist, anti-patterns (includes summary of official standards)
- `references/measuring-improvements.md` - Metrics, user testing, validation templates
- `references/output-patterns.md` - Template and examples patterns for consistent output
- `references/workflows.md` - Sequential and conditional workflow patterns
Use `scripts/analyze-metrics.sh` for consistent metric collection.
## Consulting Documentation
Verify best practices via the `claude-code-guide` subagent before claiming something is "wrong."
## Applying Changes
When implementing improvements:
- Before any changes: Create TodoWrite items for each improvement
- Apply changes: Use Edit tool, one logical change at a time
- MANDATORY verification: Use the `core:verification` skill before claiming complete
- Evidence required: Run validation commands, report actual output
Never claim "improved" or "fixed" without verification evidence.