---
name: code-review
description: Battle-tested code review practices optimizing for codebase health and team velocity
version: 1.0.0
author: Claude Assistant
tags: code-review, pr, quality, workflow, team, best-practices
---
Code review is the primary quality gate in modern software development. Poor review practices lead to slow feature velocity, missed defects, team friction, and knowledge silos. This skill codifies proven practices that prevent these failure modes.
Key insights from research:
- Review effectiveness drops dramatically beyond 200-400 LOC or 60-90 minutes
- Fast review turnaround correlates with better delivery performance
- Clear comment classification reduces author-reviewer friction
- Two-pass review (design first, details second) catches more issues
- Modern code review spreads knowledge and raises team awareness—not just defect hunting
The skill balances prescriptive guidance with flexibility—providing clear workflows without being overly rigid.
## Core Principles

### What Authors Should Do
Use this format (skim-friendly):
```markdown
## Context
<!-- Why is this needed? What problem does it solve? -->
<!-- Link to issue/ticket if applicable -->

## Changes
<!-- High-level summary of what changed (not every line) -->
- Key change 1
- Key change 2
- Key change 3

## Test Plan
<!-- How can reviewers verify this works? -->
- [ ] Unit tests: `npm test path/to/tests`
- [ ] Manual testing: Steps to reproduce
- [ ] Edge cases verified: List specific scenarios tested

## Deployment Notes
- [ ] Database migration: Yes/No (details if yes)
- [ ] Feature flag: Yes/No (name if yes)
- [ ] Configuration changes: Yes/No (what changed)
- [ ] Breaking changes: Yes/No (migration path if yes)
- [ ] Rollback plan: Describe how to safely revert

## Security Considerations
<!-- If touching auth/permissions/data/secrets -->
- Input validation strategy
- Authorization checks
- Data exposure risks addressed
```
### What Reviewers Should Do

#### Pass 1: High-Level Review

Focus on the big picture before diving into details:

**Read PR Description**
- Understand context: Why is this needed?
- Verify approach makes sense
- Check scope is appropriate
**Assess Design & Architecture**
- Does this solve the right problem?
- Is the approach sound?
- Any architectural concerns?
- Fits existing patterns or justified deviation?
**Security & Risk Pass**
- Any obvious security vulnerabilities?
- Authentication/authorization correct?
- Data validation present where needed?
- Use `/review-security` for auth/permissions/data changes
**Scope Check**
- PR focused on single concern?
- Size reasonable (< 400 lines ideal)?
- No unrelated changes?
**Decision Point:** If major issues are found (wrong approach, architectural problems, security flaws), provide feedback and STOP. There is no need for a detailed review if the design is flawed.
#### Pass 2: Implementation Review

Only proceed if Pass 1 looks good:

**Logic & Correctness**
- Logic is correct and handles edge cases
- Error handling is appropriate
- State changes are safe and intentional
- Use `/review-quality` for TypeScript/logic review
**Testing Quality**
- Tests cover new functionality
- Tests are meaningful (not just for coverage)
- Edge cases and error scenarios tested
- Use `/review-testing` for test quality analysis
**Code Quality & Readability**
- Code is readable and maintainable
- Naming is clear and consistent
- No obvious performance issues
- Use `/review-readability` for style/naming analysis
**Documentation**
- Public APIs have JSDoc comments
- Complex logic has explanatory comments
- README/docs updated if needed
**Time Management:** Set a 30-minute timer. If you can't complete the review in time:
- Stop and comment: "This is more complex than I can review in one session. Tagging @teammate."
- Or: Request PR be split if too large
- Or: Schedule dedicated review time and let author know
## Comment Classification System
[critical] - Must fix before merge
- Security vulnerabilities
- Data loss or corruption risks
- Breaking changes without migration path
- Incorrect core logic
[issue] - Should fix, discuss if cannot
- Bugs or logic errors
- Missing error handling
- Significant performance problems
- Missing tests for core functionality
[suggestion] - Nice to have, author decides
- Better patterns or approaches
- Performance optimizations
- Refactoring opportunities
- Code organization improvements
[nit] - Purely cosmetic, optional
- Variable naming preferences
- Code style (should be automated)
- Whitespace, formatting
- Comment typos
[question] - Seeking clarification
- Understanding intent
- Asking about edge cases
- Requesting explanation of approach
- Verifying assumptions
[praise] - Calling out good work
- Excellent patterns worth highlighting
- Clever solutions
- Good test coverage
- Clear documentation
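For example, a review thread using these prefixes might read as follows (the code details are hypothetical):

```markdown
[issue] This lookup runs inside the loop, so it fires once per user.
Consider batching the queries before the loop.

[nit] `usrCnt` -> `userCount`, to match naming elsewhere in the module.

[praise] Nice table-driven test here, easy to extend with new cases.
```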
## Final Decision
After both passes (< 30 minutes total):
- Approve: No blocking issues, ready to merge (or after minor nits addressed)
- Request Changes: Critical or issue-level items must be fixed before merge
- Comment: Leaving feedback but deferring final decision to another reviewer
**Batch Nits:** Leave all [nit] comments in a single comment block. Don't require changes; let the author decide.

**Praise Matters:** Call out good patterns. Mentors > nit-bots. Positive reinforcement builds better teams.
## Team-Level Patterns
Define and track response time expectations:
- First response: < 24 hours (business days) for regular PRs
- Urgent PRs: < 2 hours (clearly marked as urgent with justification)
- Merge timeline:
- Small PRs (< 200 lines): < 2 days
- Medium PRs (200-400 lines): < 5 days
- Large PRs (> 400 lines): < 10 days or request split
Fast cycles correlate with healthier delivery metrics.
### Size Policies

Enforce PR size limits:
- Target: < 300-500 lines changed
- Hard limit: > 800 lines requires explicit justification
- Exceptions: refactors, generated code, dependency updates (explain in the description)
- Use `/code-review-prep` to check PR size before opening
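As a quick local check, authors can measure the diff before opening a PR. A minimal sketch, assuming `origin/main` is the target branch:

```shell
# Total lines changed on the current branch vs. origin/main
stat_line=$(git diff --shortstat origin/main...HEAD 2>/dev/null || true)
changed=$(printf '%s\n' "$stat_line" | awk '{ print ($4 + $6) + 0 }')
if [ "${changed:-0}" -gt 400 ]; then
  echo "PR is ${changed} lines changed; consider splitting."
else
  echo "PR size OK (${changed} lines changed)."
fi
```

`--shortstat` prints a single summary line ("N files changed, X insertions(+), Y deletions(-)"); the `awk` expression sums the insertion and deletion counts.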
### CODEOWNERS

Implement automatic reviewer assignment:
- Define ownership boundaries in `.github/CODEOWNERS`
- Route reviews to domain experts
- Avoid bystander apathy (clear responsibility)
- Use `/code-review-init` to generate a CODEOWNERS template
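A minimal `.github/CODEOWNERS` sketch (the team handles and paths below are placeholders):

```
# Later patterns take precedence over earlier ones.
# Default owners for anything not matched below:
*               @org/platform-team

# Route auth changes to the security team:
/src/auth/      @org/security-team

# Schema and migration changes:
*.sql           @org/data-team
```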
### Suggested Changes
Enable "suggested changes" feature:
- Reviewers offer ready-to-apply edits
- Authors apply with one click or convert to discussion
- Reduces back-and-forth for simple fixes
- Batch multiple suggestions together
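GitHub and GitLab render a fenced `suggestion` block as a one-click applicable patch. A hypothetical reviewer comment (the code line is illustrative):

````markdown
Prefer reading the id from the session:

```suggestion
const userId = session.user.id;
```
````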
### Automate the Noise
Move mechanical checks to CI:
- Pre-commit hooks for formatting
- CI checks for tests/lint/types
- Secret scanning and SAST
- Dependency vulnerability scanning
Keep human attention for design and correctness.
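Mechanical checks can live in a pre-commit configuration. A sketch using the `pre-commit` framework (hook versions are illustrative; pin the ones your team uses):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: detect-private-key   # basic secret scanning
```

With this in place, `pre-commit install` wires the hooks into `git commit`, so formatting noise never reaches the reviewer.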
## Checklists

### Author Pre-PR Checklist

Use `/code-review-prep` to run the automated version, or verify manually:
**Code Quality**
- [ ] All tests pass locally
- [ ] No linting errors
- [ ] TypeScript compiles with no errors
- [ ] No debug code (`console.log`, `debugger`, TODO unless tracked)
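A quick way to catch leftover debug statements before committing. A sketch that scans staged changes (the patterns are examples; extend them for your stack):

```shell
# Count debug statements added in staged changes
added_debug=$(git diff --cached 2>/dev/null \
  | grep -c -E '^\+.*(console\.log\(|\bdebugger\b)' || true)
if [ "${added_debug:-0}" -gt 0 ]; then
  echo "Found ${added_debug} staged debug line(s); clean up first."
else
  echo "No debug statements staged."
fi
```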
**Self-Review**
- [ ] Reviewed own diff line-by-line
- [ ] All changes are intentional and necessary
- [ ] No unrelated changes included
- [ ] Commit messages are clear
**Testing**
- [ ] Added tests for new functionality
- [ ] Updated tests for changed functionality
- [ ] Tested edge cases and error scenarios
- [ ] Manual testing completed
**Documentation**
- [ ] Updated README if needed
- [ ] Added JSDoc for public APIs
- [ ] Updated architecture docs if significant changes
**Security & Performance**
- [ ] No hardcoded secrets or API keys
- [ ] Input validation where needed
- [ ] Performance impact considered
- [ ] Ran `/review-security` if touching auth/permissions
**PR Description**
- [ ] Context explains why
- [ ] Changes summarize what
- [ ] Test plan enables verification
- [ ] Deployment notes identify risks
- [ ] Rollback plan documented
**Size & Scope**
- [ ] PR < 400 lines or justified
- [ ] Single, clear purpose
- [ ] Large refactors split into phases
### Reviewer Checklist

**Pass 1: High-Level (5-10 min)**
- [ ] Read PR description, understand context
- [ ] Verify change solves stated problem
- [ ] Check approach is sound
- [ ] No architectural concerns
- [ ] Security check (use `/review-security` if needed)
- [ ] Scope is focused and size reasonable
**Pass 2: Implementation (10-20 min)**
- [ ] Logic correct, handles edge cases
- [ ] Error handling appropriate
- [ ] Tests cover functionality (use `/review-testing`)
- [ ] Code quality good (use `/review-quality`)
- [ ] Naming clear (use `/review-readability`)
- [ ] Documentation adequate
**Final Decision**
- [ ] Classified all comments ([critical], [issue], [suggestion], [nit], [question])
- [ ] Left at least one [praise] comment if applicable
- [ ] Decision: Approve / Request Changes / Comment
## Anti-Patterns to Avoid
Reviewer Anti-Patterns:
- Nitpicking (blocking on style instead of using tooling)
- Rubber stamping (approving without reading)
- Perfectionism (endless change requests)
- Ghost reviewing (assigned but never responds)
Team Anti-Patterns:
- Ownership vacuum (no clear reviewers)
- Approval gauntlet (requiring too many approvals)
- Process bypass ("urgent" merges without review)
- Metric gaming (optimizing for wrong numbers)
## Integration with Claude Code

**Review Orchestrator:**
- `/review-orchestrator commit` - Full pre-commit validation
- `/review-orchestrator push` - Pre-push quality gates
**Specialist Reviewers:**
- `/reviewer:security` - Security vulnerabilities and auth
- `/reviewer:quality` - TypeScript quality and logic
- `/reviewer:testing` - Test effectiveness and coverage
- `/reviewer:readability` - Code clarity and naming
- `/reviewer:basic` - Anti-patterns and common mistakes
**Custom Commands:**
- `/code-review-prep` - Author pre-flight checklist (to be created)
- `/code-review-init` - Generate PR templates and CODEOWNERS (to be created)
- `/code-review-metrics` - Analyze review health metrics (to be created)
**Workflow Integration:**

```shell
# Author workflow
git add .
/review-orchestrator commit      # Pre-commit check
git commit -m "feat: add feature"
/code-review-prep                # Pre-PR checklist
# Open PR with generated description

# Reviewer workflow
/review-security path/to/files   # If security-sensitive
/review-quality path/to/files    # Logic and types
/review-testing path/to/files    # Test quality
# Leave classified comments, make decision
```
## Metrics That Matter
Don't track:
- Raw comment counts (encourages nitpicking)
- Lines of code reviewed (encourages large batches)
- Individual reviewer speed (creates perverse incentives)
Use `/code-review-metrics` to generate a health dashboard.
## Quick Reference

**As a Reviewer:**
- Check PR size and description first
- Pass 1: Design & approach (5-10 min)
- Pass 2: Implementation & details (10-20 min)
- Classify all comments ([critical], [issue], [suggestion], [nit], [question], [praise])
- Make decision: Approve / Request Changes / Comment
- Total time budget: < 30 minutes
**As a Team Lead:**
- Generate PR template: `/code-review-init`
- Set up CODEOWNERS for automatic assignment
- Define SLAs (24h first response, 5d merge for medium PRs)
- Track metrics: `/code-review-metrics`
- Address anti-patterns proactively
- Celebrate good reviews (not just good code)
## Additional Resources
Templates:
- PR Template - GitHub/GitLab PR template
- CODEOWNERS Template - Code ownership configuration
- Author Checklist - Pre-PR validation
- Reviewer Checklist - Two-pass review process
Workflows:
- Pre-Commit Review - Individual validation
- Pre-PR Review - Author preparation
- Post-PR Review - Reviewer process
Metrics & Improvement:
- Anti-Patterns - Common failures and remediation
- Success Metrics - Measuring review effectiveness
## Usage Examples

### Example 1: Author Preparing PR
```shell
# After completing feature work
git add .
git commit -m "feat: add user profile editing"

# Run pre-PR validation
@skill code-review
> I'm about to open a PR. Can you run the author checklist?

# Claude runs automated checks and generates:
# - Validation results (tests, lint, types)
# - PR size analysis
# - Generated PR description template
# - Suggested reviewers from CODEOWNERS
# - Readiness assessment
```
### Example 2: Reviewer Conducting a Two-Pass Review
```shell
# Assigned to review PR #456 (324 lines, auth changes)
@skill code-review
> I need to review PR #456. It modifies authentication. Let's do Pass 1.

# Claude guides through high-level review:
# - Analyzes context and approach
# - Runs `/review-security` for auth changes
# - Identifies any design issues
# - Recommends proceed to Pass 2 or provide feedback

# If Pass 1 looks good:
> Pass 1 looks good. Let's do Pass 2.

# Claude runs specialist reviewers in parallel:
# - `/review-quality` for logic and types
# - `/review-testing` for test coverage
# - `/review-readability` for code clarity
# - Aggregates findings with severity classification
```
### Example 3: Team Lead Initializing for a Team
```shell
@skill code-review
> We want to improve our code review process. Help us get started.

# Claude runs `/code-review-init`:
# - Generates .github/pull_request_template.md
# - Generates .github/CODEOWNERS
# - Creates docs/code-review-playbook.md
# - Provides team adoption roadmap
# - Suggests SLAs and policies
```
**Note:** This skill is designed to work with your existing development workflow. Start with the author checklist, adopt two-pass reviews, then expand to team-wide practices as you gain confidence.