Claude Code Plugins

Community-maintained marketplace

Review generated code for bugs, security issues, performance, and best practices. Use when reviewing Claude-generated code, checking for vulnerabilities, auditing implementation quality, or validating code changes before commit.

Install Skill

1. Download skill

2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: code-review
description: Review generated code for bugs, security issues, performance, and best practices. Use when reviewing Claude-generated code, checking for vulnerabilities, auditing implementation quality, or validating code changes before commit.

Code Review Skill

A structured code review skill for analyzing generated implementations, focusing on quality, security, maintainability, and alignment with project standards.

When This Skill Activates

  • Reviewing code generated by Claude Code or other AI agents
  • Auditing for security vulnerabilities
  • Checking code quality and best practices
  • Validating against project standards
  • Assessing performance implications
  • Before committing significant changes

Review Methodology

Analysis Focus Areas

1. Code Quality

  • Structure and readability
  • Naming conventions (PascalCase classes, snake_case functions; see the example below)
  • Appropriate abstraction levels
  • DRY principle adherence
  • SOLID principles compliance
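
A minimal illustration of the naming conventions above (the class and function names are hypothetical):

# Hypothetical example: PascalCase class, snake_case function
class ScheduleValidator:
    """Validates schedule assignments against program rules."""

    def validate_assignment(self, assignment_id: str) -> bool:
        """Return True if the assignment passes all rule checks."""
        ...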

2. Security & Safety

  • Input validation and sanitization (see the sketch below)
  • Authentication/authorization checks
  • Error handling coverage
  • OWASP Top 10 vulnerability check
  • No hardcoded secrets
  • SQL injection prevention (SQLAlchemy ORM usage)
  • Path traversal prevention
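
For the input-validation item above, one sketch of rejecting malformed input at the schema boundary, assuming Pydantic v2 (the model and fields are hypothetical):

from pydantic import BaseModel, Field, field_validator

# Hypothetical schema for user-supplied query input
class UserQuery(BaseModel):
    user_id: int = Field(gt=0)  # reject non-positive IDs
    search_term: str = Field(min_length=1, max_length=100)

    @field_validator("search_term")
    @classmethod
    def no_control_chars(cls, v: str) -> str:
        # Reject inputs carrying control characters
        if any(ord(c) < 32 for c in v):
            raise ValueError("control characters are not allowed")
        return v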

3. Performance

  • Algorithm efficiency (Big-O analysis)
  • Database query optimization (N+1 prevention; see the example below)
  • Memory usage patterns
  • Caching opportunities
  • Async/await for I/O operations
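
For the N+1 item above, the usual SQLAlchemy fix is eager loading. A sketch, assuming a hypothetical Schedule model with an assignments relationship:

from sqlalchemy import select
from sqlalchemy.orm import selectinload

# BAD - one query for schedules plus one lazy load per row
# (an async session will error on the lazy load instead)
result = await db.execute(select(Schedule))
for schedule in result.scalars():
    _ = schedule.assignments  # each access hits the database again

# GOOD - two queries total, regardless of row count
stmt = select(Schedule).options(selectinload(Schedule.assignments))
result = await db.execute(stmt)
schedules = result.scalars().all()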

4. Maintainability

  • Test coverage adequacy
  • Type hints on all functions
  • Google-style docstrings (see the example below)
  • Exception handling
  • Pydantic schema usage
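
A function satisfying the type-hint and docstring items above might look like this (the name and logic are illustrative):

def count_open_slots(assigned: int, capacity: int) -> int:
    """Count unfilled slots for a rotation.

    Args:
        assigned: Number of residents already assigned.
        capacity: Maximum number of residents the rotation accepts.

    Returns:
        The number of slots still available, never negative.

    Raises:
        ValueError: If capacity is negative.
    """
    if capacity < 0:
        raise ValueError("capacity must be non-negative")
    return max(capacity - assigned, 0)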

5. Standards Compliance

  • Project coding standards (CLAUDE.md)
  • Framework conventions (FastAPI patterns)
  • ACGME compliance (if touching scheduling)
  • HIPAA/PERSEC considerations

Review Process

Step 1: Context Gathering

# Understand the change scope
git diff HEAD~1 --stat
git diff HEAD~1 --name-only

# Read the changed files
git diff HEAD~1 <file>

Step 2: Architecture Review

Check layered architecture compliance (a sketch follows the questions below):

Route (thin) -> Controller -> Service -> Repository -> Model

Questions to ask:

  • Does business logic stay in services?
  • Are database operations async?
  • Are Pydantic schemas used for validation?
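
A minimal sketch of that layering in FastAPI, with illustrative names (ItemResponse, Item, and get_db stand in for the project's real schema, model, and session dependency):

from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

router = APIRouter()

# Route layer (thin): parse input, delegate, return the schema
@router.get("/items/{item_id}", response_model=ItemResponse)
async def read_item(item_id: str, db: AsyncSession = Depends(get_db)) -> ItemResponse:
    return await get_item_service(db, item_id)

# Service layer: business rules and error decisions live here
async def get_item_service(db: AsyncSession, item_id: str) -> ItemResponse:
    item = await get_item_by_id(db, item_id)
    if item is None:
        raise HTTPException(status_code=404, detail="Item not found")
    return ItemResponse.model_validate(item)

# Repository layer: database access only
async def get_item_by_id(db: AsyncSession, item_id: str) -> Item | None:
    result = await db.execute(select(Item).where(Item.id == item_id))
    return result.scalar_one_or_none()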

Step 3: Static Analysis

cd /home/user/Autonomous-Assignment-Program-Manager/backend

# Linting
ruff check <file> --show-source

# Type checking
mypy <file> --python-version 3.11

# Security scan
bandit -r <file> -ll

Step 4: Pattern Matching

Check for common issues:

| Pattern | Issue | Fix |
| --- | --- | --- |
| `== True` | Explicit boolean comparison | Use `if var:` |
| Missing `await` | Sync call in async context | Add `await` |
| Bare `except:` | Catches all exceptions | Catch specific exceptions |
| `Any` type | Type escape hatch | Use proper typing |
| Unused variable | Dead code | Remove or use `_` |
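
Before/after pairs for two of the table's patterns (process, ValidationError, and logger are hypothetical stand-ins):

# BAD - explicit boolean comparison and a bare except
if is_active == True:
    try:
        process(item)  # process() is a hypothetical helper
    except:
        pass  # silently swallows every error, even KeyboardInterrupt

# GOOD - truthiness check and a specific exception
if is_active:
    try:
        process(item)
    except ValidationError as exc:
        logger.warning("processing failed: %s", exc)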

Step 5: Security Deep Dive

# Check for these patterns:

# BAD - SQL injection risk
query = f"SELECT * FROM users WHERE id = {user_id}"

# GOOD - Parameterized
query = select(User).where(User.id == user_id)

# BAD - Path traversal
file_path = base_dir + user_input

# GOOD - Validated path
file_path = validate_path(base_dir, user_input)

# BAD - Sensitive data in error
raise HTTPException(detail=f"User {email} not found")

# GOOD - Generic error
raise HTTPException(detail="User not found")
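
`validate_path` is not defined by this skill; a minimal sketch of how such a helper might work, using only the standard library:

from pathlib import Path

def validate_path(base_dir: str, user_input: str) -> Path:
    """Resolve user_input under base_dir, rejecting traversal attempts."""
    base = Path(base_dir).resolve()
    candidate = (base / user_input).resolve()
    # resolve() collapses "..", so an escape ends up outside base
    if not candidate.is_relative_to(base):
        raise ValueError("path escapes the allowed directory")
    return candidate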

Output Format

Finding Categories

Use these severity levels:

| Level | Icon | Meaning |
| --- | --- | --- |
| CRITICAL | :red_circle: | Security vulnerability or major bug; must fix |
| WARNING | :yellow_circle: | Code quality issue with production impact |
| INFO | :blue_circle: | Best practices and optimization suggestions |
| GOOD | :white_check_mark: | Well-implemented patterns worth highlighting |

Review Report Template

## Code Review Summary

**Files Reviewed:** [count]
**Overall Assessment:** [PASS / NEEDS CHANGES / BLOCK]

### Critical Issues (Must Fix)
1. [File:line] - Description
   - Impact: [what could go wrong]
   - Fix: [specific suggestion]

### Warnings (Should Fix)
1. [File:line] - Description
   - Suggestion: [how to improve]

### Recommendations (Nice to Have)
1. [File:line] - Description

### Good Patterns Observed
1. [File:line] - Description of well-implemented code

### Summary Checklist
- [ ] All type hints present
- [ ] Tests added for new code
- [ ] No security issues
- [ ] Follows layered architecture
- [ ] Async operations correct
- [ ] Error handling appropriate

Integration with Existing Skills

With automated-code-fixer

When critical or warning issues are found:

  1. Document the issue
  2. Trigger automated-code-fixer skill
  3. Re-run review after fix
  4. Verify quality gates pass

With code-quality-monitor

Before final approval:

# Run full quality check
cd /home/user/Autonomous-Assignment-Program-Manager/backend
pytest --tb=no -q && ruff check app/ && mypy app/

With security-audit

For security-sensitive changes:

  1. Defer to security-audit skill
  2. Require additional review for auth/crypto code
  3. Escalate HIPAA/PERSEC concerns

Escalation Rules

Escalate to human when:

  1. Changes touch authentication or authorization
  2. Database schema modifications detected
  3. ACGME compliance logic affected
  4. Cryptographic code modified
  5. Handling of third-party API credentials
  6. Unclear business logic requirements
  7. Multiple interdependent changes

Quick Review Commands

# Full review suite
cd /home/user/Autonomous-Assignment-Program-Manager/backend

# 1. Check syntax and imports
ruff check <file> --select F,I

# 2. Check security
bandit -r <file> -ll

# 3. Check types
mypy <file> --ignore-missing-imports

# 4. Check tests exist
pytest --collect-only tests/test_<module>.py

# 5. Run related tests
pytest tests/test_<module>.py -v

Common Review Patterns

Python Backend

# REVIEW: Ensure async/await pattern
from typing import Optional
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

async def get_item(db: AsyncSession, item_id: str) -> Optional[Item]:
    result = await db.execute(select(Item).where(Item.id == item_id))
    return result.scalar_one_or_none()

# REVIEW: Check Pydantic schema usage
@router.post("/items", response_model=ItemResponse)
async def create_item(
    item: ItemCreate,  # Pydantic input validation
    db: AsyncSession = Depends(get_db),
    current_user: User = Depends(get_current_user)
) -> ItemResponse:
    pass  # body elided; a real route would delegate to the service layer

TypeScript Frontend

// REVIEW: Check for proper typing
interface Props {
  scheduleId: string;  // Not 'any'
  onUpdate: (schedule: Schedule) => void;
}

// REVIEW: Error boundaries and loading states
const ScheduleView: React.FC<Props> = ({ scheduleId }) => {
  const { data, error, isLoading } = useQuery(['schedule', scheduleId]); // assumes a default queryFn is configured

  if (isLoading) return <Skeleton />;
  if (error) return <ErrorBoundary error={error} />;
  return <Schedule data={data} />;
};