Claude Code Plugins

Community-maintained marketplace

Perform systematic code reviews. Use when reviewing code quality, security, or best practices.

Install Skill

1. Download skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please read through the skill's instructions and verify them before using it.

SKILL.md

name: code-review
description: Perform systematic code reviews. Use when reviewing code quality, security, or best practices.

Code Review Skill

This skill provides a systematic approach to conducting thorough code reviews.

Review Checklist

1. Code Quality

Readability

  • Variable and function names are descriptive
  • Code follows consistent style conventions
  • Complex logic is commented
  • Functions are reasonably sized (< 50 lines)
  • No deeply nested code (> 3 levels; see the sketch below)
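
For the nesting item, guard clauses with early returns keep the happy path flat. A minimal Go sketch (the Order type and ship helper are hypothetical):

// Bad: three levels of nesting before the core logic
func handleBad(order *Order) error {
    if order != nil {
        if order.Paid {
            if len(order.Items) > 0 {
                return ship(order)
            }
        }
    }
    return nil  // ❌ Silently swallows invalid orders
}

// Good: early returns keep nesting at one level
func handle(order *Order) error {
    if order == nil {
        return errors.New("nil order")
    }
    if !order.Paid {
        return errors.New("order not paid")
    }
    if len(order.Items) == 0 {
        return errors.New("order has no items")
    }
    return ship(order)
}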

Structure

  • Code is organized into logical modules
  • Separation of concerns is maintained
  • DRY principle is followed (no code duplication)
  • Single Responsibility Principle is applied

Error Handling

  • All errors are properly handled
  • Error messages are informative
  • No silent failures
  • Resources are cleaned up in error paths

2. Security

Input Validation

  • All user inputs are validated
  • SQL injection prevention (parameterized queries; see the sketch below)
  • XSS prevention (output encoding)
  • Path traversal prevention (safe path handling)
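
For the SQL injection item, never concatenate user input into a query string; pass it as a parameter instead. A minimal sketch using Go's database/sql (the db handle and users table are assumptions; placeholder syntax varies by driver):

// Bad: user input concatenated into the query
err := db.QueryRow("SELECT id FROM users WHERE email = '"+email+"'").Scan(&id)  // ❌ Injectable

// Good: parameterized query; the driver handles escaping
err = db.QueryRow("SELECT id FROM users WHERE email = ?", email).Scan(&id)  // ✅ Safe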

Authentication & Authorization

  • Authentication is required where needed
  • Authorization checks are in place
  • Sensitive data is not logged
  • Credentials are not hardcoded (see the sketch below)
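
For the hardcoded-credentials item, secrets should be injected at runtime rather than committed to source. A minimal Go sketch (the API_KEY variable name is an assumption):

// Bad: secret committed to source control
const apiKey = "sk-live-abc123"  // ❌ Hardcoded credential

// Good: secret comes from the environment or a secret manager
apiKey := os.Getenv("API_KEY")
if apiKey == "" {
    return errors.New("API_KEY is not set")  // ✅ Fail fast, nothing to leak
}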

Data Protection

  • Sensitive data is encrypted
  • Passwords are properly hashed (see the bcrypt sketch below)
  • HTTPS is used for network communication
  • CSRF protection is implemented
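
For password hashing, use an adaptive algorithm such as bcrypt rather than a fast general-purpose hash. A minimal sketch using golang.org/x/crypto/bcrypt:

// Registration: store only the bcrypt hash, never the plaintext
hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
if err != nil {
    return err
}

// Login: compare the stored hash against the submitted password
if err := bcrypt.CompareHashAndPassword(hash, []byte(password)); err != nil {
    return errors.New("invalid credentials")  // ✅ No hint about which part failed
}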

3. Performance

Efficiency

  • No N+1 query problems (see the sketch below)
  • Appropriate data structures used
  • No unnecessary loops or iterations
  • Database queries are optimized
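
The N+1 problem usually looks like a query inside a loop. A sketch with database/sql (the orders/users schema is an assumption):

// Bad: one extra query per order (N+1 round trips)
for _, o := range orders {
    var name string
    _ = db.QueryRow("SELECT name FROM users WHERE id = ?", o.UserID).Scan(&name)  // ❌
}

// Good: a single JOIN returns the same data in one round trip
rows, err := db.Query(`
    SELECT o.id, u.name
    FROM orders o
    JOIN users u ON u.id = o.user_id`)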

Resource Management

  • No memory leaks
  • Files and connections are properly closed
  • Large datasets are paginated (see the sketch below)
  • Caching is used where appropriate
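
For pagination, fetch bounded pages instead of whole tables. A minimal keyset-pagination sketch (column names are assumptions):

// Good: bounded page, resumable via the last seen ID
rows, err := db.Query(
    "SELECT id, name FROM users WHERE id > ? ORDER BY id LIMIT ?",
    lastSeenID, pageSize,
)  // ✅ Memory use stays flat regardless of table size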

4. Testing

Coverage

  • Unit tests exist for core logic
  • Edge cases are tested (see the table-driven test below)
  • Error paths are tested
  • Integration tests for critical flows
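
Table-driven tests make it cheap to cover edge cases and error paths together. A Go sketch for a hypothetical ParseAge function:

func TestParseAge(t *testing.T) {
    cases := []struct {
        name    string
        input   string
        want    int
        wantErr bool
    }{
        {"valid", "42", 42, false},
        {"zero", "0", 0, false},
        {"negative", "-1", 0, true},       // edge case
        {"not a number", "abc", 0, true},  // error path
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            got, err := ParseAge(tc.input)
            if (err != nil) != tc.wantErr {
                t.Fatalf("ParseAge(%q) error = %v, wantErr %v", tc.input, err, tc.wantErr)
            }
            if err == nil && got != tc.want {
                t.Errorf("ParseAge(%q) = %d, want %d", tc.input, got, tc.want)
            }
        })
    }
}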

Quality

  • Tests are independent
  • Tests are deterministic
  • Mock/stub external dependencies (see the sketch below)
  • Test names clearly describe what is tested
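
For mocking external dependencies, have production code depend on a small interface so tests can swap in an in-memory fake. A Go sketch (the Mailer interface is hypothetical):

// Production code depends on this interface, not a concrete SMTP client
type Mailer interface {
    Send(to, subject, body string) error
}

// Test double: records calls instead of touching the network
type fakeMailer struct{ sent []string }

func (f *fakeMailer) Send(to, subject, body string) error {
    f.sent = append(f.sent, to)
    return nil
}
// In a test, inject &fakeMailer{}; the test stays deterministic and offline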

5. Documentation

Code Documentation

  • Public APIs are documented (see the example below)
  • Complex algorithms are explained
  • TODO/FIXME comments have owner and date
  • README is up to date
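
In Go, a public API is documented with a doc comment that starts with the identifier's name; godoc and pkg.go.dev render it directly. A sketch for a hypothetical Retry helper:

// Retry calls fn up to attempts times, sleeping delay between attempts.
// It returns nil on the first success, or the last error if all attempts fail.
func Retry(attempts int, delay time.Duration, fn func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = fn(); err == nil {
            return nil
        }
        time.Sleep(delay)
    }
    return err
}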

Architecture

  • Design decisions are documented
  • API contracts are clear
  • Dependencies are documented
  • Configuration options are documented

Language-Specific Checks

Go

// Good: Clear error handling
func processFile(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return fmt.Errorf("failed to open %s: %w", path, err)
    }
    defer f.Close()
    
    // Process file
    return nil
}

// Bad: Ignored error
func processFileBad(path string) {
    f, _ := os.Open(path)  // ❌ Error ignored
    // Process file
}

Python

# Good: Context manager closes the file even if an exception is raised
def process_file(path):
    with open(path, 'r') as f:
        data = f.read()
    # Process data
    return data

# Bad: Resource not closed
def process_file_bad(path):
    f = open(path, 'r')  # ❌ File not closed (leaks if an exception is raised)
    data = f.read()
    return data

JavaScript/TypeScript

// Good: Async error handling
async function fetchData(url: string): Promise<Data> {
    try {
        const response = await fetch(url);
        if (!response.ok) {
            throw new Error(`HTTP ${response.status}`);
        }
        return await response.json();
    } catch (error) {
        console.error('Fetch failed:', error);
        throw error;
    }
}

// Bad: Unhandled promise rejection
async function fetchDataBad(url: string): Promise<Data> {
    const response = await fetch(url);  // ❌ No error handling
    return await response.json();
}

Common Anti-Patterns

1. Magic Numbers

// Bad
if user.Age >= 18 {  // ❌ What does 18 mean?

// Good
const legalAge = 18
if user.Age >= legalAge {  // ✅ Clear intent

2. God Classes

// Bad: Class does everything
type UserManager struct {}
func (um *UserManager) CreateUser() {}
func (um *UserManager) SendEmail() {}
func (um *UserManager) ProcessPayment() {}
func (um *UserManager) GenerateReport() {}

// Good: Separate responsibilities
type UserService struct {}
type EmailService struct {}
type PaymentService struct {}
type ReportService struct {}

3. Premature Optimization

// Bad: Complex optimization without profiling
func processDataOptimized(data []int) {
    // 50 lines of complex optimization
}

// Good: Start simple, optimize if needed
func processData(data []int) {
    // Simple, clear implementation
    // Add optimization after profiling shows bottleneck
}

4. Callback Hell

// Bad: Nested callbacks
getData(function(a) {
    getMoreData(a, function(b) {
        getMoreData(b, function(c) {
            // ❌ Hard to follow
        });
    });
});

// Good: Promises or async/await
const a = await getData();
const b = await getMoreData(a);
const c = await getMoreData(b);

Review Process

1. Before Review

  • Run linters and formatters
  • Ensure tests pass
  • Check build succeeds
  • Review diff yourself first

2. During Review

  • Read code line by line
  • Check against checklist
  • Test critical paths manually
  • Consider edge cases

3. Providing Feedback

  • Be constructive and specific
  • Explain WHY, not just WHAT
  • Suggest alternatives
  • Acknowledge good patterns

4. Example Feedback

Good:

Consider using a map instead of a slice here. With 1000+ items, lookups will be O(n) with a slice but O(1) with a map. This could impact performance in the common case where we do many lookups.

Bad:

This is wrong. Use a map.

Tools

Static Analysis

# Go
go vet ./...
staticcheck ./...
golangci-lint run

# Python
pylint mycode.py
flake8 mycode.py
mypy mycode.py

# JavaScript
eslint src/

Security Scanning

# Go
gosec ./...

# Python
bandit -r .

# Node.js
npm audit

Code Coverage

# Go
go test -cover ./...

# Python
pytest --cov=mypackage

# JavaScript
jest --coverage

Quick Reference

Priority Levels:

  • 🔴 Critical: Security issues, data loss risks
  • 🟡 Important: Bugs, performance issues
  • 🔵 Nice to have: Style, documentation

Review Time:

  • Small PR (< 100 lines): 15-30 minutes
  • Medium PR (100-300 lines): 30-60 minutes
  • Large PR (> 300 lines): Consider splitting

When to Request Changes:

  • Security vulnerabilities
  • Significant bugs
  • Missing tests for critical logic
  • Breaking changes without migration plan