---
name: code-review-patterns
description: Code review best practices including review checklists (functionality, tests, documentation, security, performance), providing constructive feedback, automated checks integration, and handling review comments. Use when reviewing pull requests, providing code feedback, responding to review comments, or setting up review processes.
allowed-tools: Read, Grep, Glob
---
# Code Review Patterns

**Purpose:** Guide to effective code reviews with emphasis on constructive feedback, evidence-based findings, and collaborative improvement.

**When to use:** Reviewing pull requests, providing code feedback, responding to review comments, setting up review workflows, or training reviewers.

**For extended examples and detailed workflows:** See reference.md
## Core Principles

- **Collaboration over criticism** - Reviews improve code AND build team knowledge
- **Evidence-based findings** - Concrete examples, not vague suggestions
- **Actionable feedback** - Specific fixes, not "make it better"
- **Context matters** - Consider project stage, urgency, team experience
- **Automate the boring** - Linting/tests run first, humans review substance
- **Kind and specific** - Respectful tone, clear reasoning
## Review Contexts (Choose Your Approach)
| Context | Focus | Depth | Automation |
|---|---|---|---|
| Architecture Review | Design patterns, scalability, maintainability | High-level structure | Manual |
| Detailed Code Review | Logic, edge cases, performance, security | Line-by-line | Hybrid (auto checks + manual) |
| Quick Review | Critical bugs, breaking changes only | Targeted | Mostly automated |
| Mentor Review | Teaching opportunity, explain reasoning | Deep + educational | Manual with explanations |
| Pre-merge Check | Tests pass, no conflicts, standards met | Checklist verification | Fully automated |
**Choose based on:** PR size, risk level, author experience, project phase.
## Review Workflow Pattern

### Phase 1: Automated Checks (Before Human Review)

```bash
ruff format --check .     # Code formatting
ruff check .              # Linting
pyright .                 # Type checking
pytest tests/ --cov=src   # Tests + coverage
bandit -r src/            # Security scanning
```

**Why first:** Machines catch style issues, humans review substance.

**Integration:** See the gitlab-scripts skill for CI/CD patterns.
### Phase 2: Human Review Checklist

#### Step 1: Context Understanding (5 min)

- Read PR description and linked tickets
- Understand WHAT and WHY
- Check scope matches description

**Integration:** Use `gitlab-mr-comments` to fetch existing discussions.

#### Step 2: High-Level Review (10 min)

- Architecture makes sense
- Changes focused, no scope creep
- Backward compatibility maintained
- Dependencies justified

**Integration:** Use the code-refactoring skill for complexity checks.
#### Step 3: Detailed Code Review (20-40 min)

**Functionality:**

- Logic correct (happy path + edge cases)
- Error handling appropriate (see the sketch after this list)
- No obvious bugs
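A minimal sketch of the error-handling contrast reviewers tend to flag; `fetch_user` is a hypothetical stand-in for a real lookup, not an API from any particular codebase:

```python
import logging

logger = logging.getLogger(__name__)

def fetch_user(user_id: int) -> dict:
    """Hypothetical data-access helper standing in for a real lookup."""
    users = {1: {"name": "Ada"}}
    return users[user_id]  # raises KeyError for unknown ids

def load_profile_bad(user_id: int):
    try:
        return fetch_user(user_id)
    except Exception:  # broad except hides the real failure
        return None    # caller can't tell "missing" from "broken"

def load_profile(user_id: int):
    try:
        return fetch_user(user_id)
    except KeyError:  # catch only what you expect; let the rest propagate
        logger.warning("User %s not found", user_id)
        return None
```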
**Security:**

- Input validation for external data (see the sketch after this list)
- No secrets in code/logs
- Auth/authz correct
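A minimal validation sketch for the input-validation check; the function name and rules are illustrative assumptions, not from any specific project:

```python
def parse_user_id(raw: str) -> int:
    """Validate untrusted external input before it reaches queries or logs."""
    if not raw.isdigit():
        raise ValueError(f"invalid user id: {raw!r}")
    user_id = int(raw)
    if user_id <= 0:
        raise ValueError("user id must be positive")
    return user_id

parse_user_id("42")  # -> 42
# parse_user_id("1 OR 1=1") raises ValueError instead of reaching the database
```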
**Performance:**

- No N+1 queries (see the sketch after this list)
- Caching appropriate
- Algorithm complexity reasonable
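A self-contained illustration of the N+1 pattern and its batched fix, using stdlib `sqlite3` with a throwaway in-memory table (schema and data are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO orders (user_id) VALUES (1), (2), (3);
""")
user_ids = [1, 2, 3]

# N+1: one query per user inside the loop (N items x 1 query)
orders = []
for uid in user_ids:
    orders += conn.execute(
        "SELECT id FROM orders WHERE user_id = ?", (uid,)
    ).fetchall()

# Batched: a single query for all users; the f-string only inserts
# "?" markers, so values still go through bound parameters
placeholders = ",".join("?" * len(user_ids))
orders = conn.execute(
    f"SELECT id FROM orders WHERE user_id IN ({placeholders})", user_ids
).fetchall()
```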
**Integration:** Use vulnerability-triage for security, pr-review-standards for detailed checks.
#### Step 4: Tests Review (10 min)

- Tests exist for new code
- Edge cases covered (see the sketch after this list)
- Coverage ≄ 95%
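A sketch of the edge-case coverage a reviewer should expect to see; `clamp` is a hypothetical function under review:

```python
import pytest

def clamp(value: int, low: int, high: int) -> int:
    """Hypothetical function under review."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Happy path plus boundaries: inside, below, above, and both edges
@pytest.mark.parametrize(
    ("value", "expected"),
    [(5, 5), (-1, 0), (11, 10), (0, 0), (10, 10)],
)
def test_clamp(value, expected):
    assert clamp(value, 0, 10) == expected

def test_clamp_rejects_inverted_bounds():
    with pytest.raises(ValueError):
        clamp(5, 10, 0)
```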
**Integration:** See the testing-standards skill.
#### Step 5: Documentation (5 min)

- Public APIs documented (see the sketch after this list)
- Non-obvious logic explained
- README updated if needed
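A sketch of the docstring depth worth asking for on a public API; the `retry` helper and its Google-style docstring are illustrative assumptions:

```python
def retry(func, attempts: int = 3):
    """Call ``func`` until it succeeds or ``attempts`` is exhausted.

    Args:
        func: Zero-argument callable to invoke.
        attempts: Maximum number of calls; must be >= 1.

    Returns:
        Whatever ``func`` returns on the first success.

    Raises:
        Exception: Re-raises the last error if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
```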
## Providing Constructive Feedback

### The Formula

OBSERVATION + EVIDENCE + IMPACT + SUGGESTION = Actionable Review

### Feedback Template

```markdown
**File:** path/to/file.py:LINE
**Issue:** [Specific problem]
**Evidence:** [Code snippet showing issue]
**Impact:** [What breaks/degrades]
**Fix:** [Exact code change]
**Severity:** [Critical/High/Medium/Low]
```
### Example: Security Issue

**File:** src/api/auth.py:45
**Issue:** SQL injection vulnerability
**Evidence:**

```python
query = f"SELECT * FROM users WHERE id={user_id}"
```

**Impact:** Attacker can inject `user_id="1 OR 1=1"` to access all users
**Fix:**

```python
query = "SELECT * FROM users WHERE id=?"
cursor.execute(query, (user_id,))
```

**Severity:** Critical - Exploitable vulnerability

**For more examples:** See reference.md (logic bugs, performance, tests)
---
## Feedback Tone Guidelines
### Positive Framing
| Instead of | Use |
|------------|-----|
| "This is wrong" | "Consider this alternative" |
| "You forgot error handling" | "What happens if the API fails?" |
| "This won't work" | "This might have an issue when X happens" |
### Acknowledge Good Work
- āœ… Great extraction of validation logic - much easier to test
- āœ… Nice use of caching - will significantly improve performance
- āœ… Good error messages with context - debugging will be easier
### Ask Questions (Teaching)
- šŸ’” What's the reasoning for using list instead of set?
- šŸ’” Have we considered timeout scenarios?
- šŸ’” Could you explain the tradeoff here?
---
## Handling Review Comments (As Author)
### Decision Framework
| Comment Type | When to Accept | When to Push Back |
|--------------|----------------|-------------------|
| **Bugs** | Always | Never |
| **Security** | Always | If false positive with proof |
| **Standards** | Project standards | If standard outdated |
| **Style preferences** | If improves clarity | If subjective |
| **Scope creep** | If critical | If can be follow-up PR |
### Response Patterns
**Accepting:**
āœ… Fixed in commit abc123. Changed to parameterized query. Good catch - missed the None edge case.
**Pushing back:**
šŸ¤” I considered that but opted for X because Y. Evidence: [proof]. Open to discussion if I'm missing something.
**Deferring:**
šŸ“‹ Created follow-up ticket JIRA-1234. Keeping current PR focused on original scope.
---
## Code Smells Checklist
### Critical Issues
| Pattern | Evidence | Severity |
|---------|----------|----------|
| **SQL Injection** | `f"SELECT * WHERE id={input}"` | Critical |
| **Auth Bypass** | Missing auth check on endpoint | Critical |
| **Secrets in Code** | Hardcoded API keys/passwords | Critical |
| **Uncaught Exceptions** | Broad try/except hiding errors | High |
### Quality Issues
| Pattern | Evidence | Severity |
|---------|----------|----------|
| **God Function** | 100+ lines, multiple responsibilities | Medium |
| **Magic Numbers** | `if count > 42:` (no explanation) | Low |
| **Commented Code** | Code in comments (use git) | Low |
### Performance Issues
| Pattern | Detection | Calculation |
|---------|-----------|-------------|
| **N+1 Queries** | DB query in loop | N items Ɨ 1 query |
| **Removed Caching** | Deleted `@lru_cache` (see the sketch below) | 1 calc → N calcs |
| **Algorithm Regression** | `O(n)` → `O(n²)` | 1000 items = 1M ops |
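To make the removed-caching row concrete, a minimal sketch with stdlib `functools.lru_cache`; the rate table is invented for the example:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def shipping_rate(region: str) -> float:
    """Stands in for a slow database or API lookup."""
    return {"eu": 4.99, "us": 5.99}.get(region, 9.99)

shipping_rate("eu")  # computed once
shipping_rate("eu")  # served from cache
```

Deleting the decorator still compiles and passes tests, but silently turns one computation into N, which is why it belongs on the checklist.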
**For detailed patterns:** See code-refactoring skill.
---
## Review Severity Guidelines
| Severity | Examples | Action |
|----------|----------|--------|
| **Critical** | SQL injection, auth bypass, data corruption | MUST fix |
| **High** | Logic bugs, untested critical functions | SHOULD fix |
| **Medium** | Code quality, missing edge cases | Discuss |
| **Low** | Style preferences, optimizations | Optional |
| **Uncertain** | Can't confirm without context | Flag for discussion |
**Integration:** See pr-review-standards for detailed severity matrix.
---
## Automated Checks Integration
### CI Pipeline Pattern
```yaml
# .gitlab-ci.yml
code-quality:
  script:
    - ruff format --check .
    - ruff check .
    - pyright .

tests:
  script:
    - pytest tests/ --cov=src --cov-fail-under=95

security:
  script:
    - bandit -r src/
    - safety check
```

### GitLab MR Integration

```bash
# Fetch existing comments
gitlab-mr-comments INT-3877
```

**Use in review:** Build on existing discussions, avoid duplication.

**Integration:** See the gitlab-scripts skill for the complete workflow.
## Review Size Guidelines
| Lines Changed | Review Time | Approach |
|---|---|---|
| 1-50 | 10-15 min | Quick focused review |
| 50-200 | 30-45 min | Detailed review |
| 200-500 | 1-2 hours | Break into chunks |
| 500+ | 2+ hours | Request split |
**Large PR strategy:** Review the architectural overview first, then work through individual chunks, prioritizing high-risk areas.
## Common Pitfalls (Avoid These)

### Reviewer Pitfalls

- āŒ Nitpicking style (linter's job)
- āŒ Vague feedback ("Make better")
- āŒ Blocking on preferences
- āŒ Ignoring context (prototype ≠ production)
- āŒ No positive feedback

### Author Pitfalls

- āŒ Defensive responses
- āŒ Ignoring feedback
- āŒ Massive PRs
- āŒ No context in description
- āŒ Breaking CI before review
## Quick Reference Templates

### Bug Finding

```markdown
**File:** src/module/file.py:LINE
**Issue:** [Problem]
**Evidence:** [Code snippet]
**Impact:** [What breaks]
**Fix:** [Exact change]
**Severity:** [Level]
```

### Question

```markdown
šŸ’” **Question:** src/module/file.py:LINE
[Specific question about approach/tradeoff]
**Context:** [Why asking]
```

### Positive Feedback

```markdown
āœ… **Great pattern:** src/module/file.py:LINE
[What they did well and why]
[Optional: Suggest applying elsewhere]
```
## Integration with Other Skills
| Skill | Use For | Example |
|---|---|---|
| gitlab-scripts | Fetch MR data | `gitlab-mr-comments <ticket>` |
| code-refactoring | Complexity checks | Flag functions >50 statements |
| vulnerability-triage | Security assessment | Use CVSS scoring |
| pr-review-standards | Detailed standards | try/except rules, logging patterns |
| testing-standards | Coverage requirements | 95%+ unit, 1:1 file mapping |
## Review Anti-Patterns Matrix
| Anti-Pattern | Example | Better Approach |
|---|---|---|
| No evidence | "This is bad" | Show code snippet |
| Vague | "Optimize this" | "N+1 query - use batch (see code)" |
| Style blocking | Block on 81-char line | Configure linter |
| Theoretical | "Could maybe fail if..." | Only flag actual issues |
| Scope creep | "Also add feature X" | Separate PR |
## Summary

### Review Phases

1. **Automated** - Linting, tests, security (CI/CD)
2. **Context** - Read description, tickets, existing comments
3. **High-level** - Architecture, scope, breaking changes
4. **Detailed** - Functionality, security, performance, tests
5. **Documentation** - Comments, README, migration guides

### The Golden Rules

- **Be kind and specific** - Respectful tone, concrete examples
- **Evidence-based** - Show code, don't just claim
- **Actionable** - Exact fixes, not vague advice
- **Context-aware** - Prototype ≠ production
- **Automate the boring** - Linters catch style
- **Acknowledge good work** - Positive reinforcement

### The Test

- Can the author act on the feedback immediately? If not, be more specific.
- Would you want to receive this feedback? If not, reframe it.

**Remember:** Reviews improve code AND build team knowledge. Collaboration over criticism.

**For detailed examples, workflows, and troubleshooting:** See reference.md

**Related skills:**
- pr-review-standards (detailed code quality standards)
- gitlab-scripts (MR integration)
- code-refactoring (complexity patterns)
- vulnerability-triage (security assessment)
- testing-standards (coverage requirements)