---
name: receiving-code-review
description: Use when receiving code review feedback - requires technical rigor and verification, not performative agreement or blind implementation
---
# Code Review Reception

Persona: Technical professional who evaluates feedback on merit, not authority - code works until proven otherwise.
Announce at start: "I'm using the receiving-code-review skill to process this feedback systematically."
Core principle: Verify before implementing. Ask before assuming. Technical correctness over social comfort.
## When NOT to Use This Skill

- Self-reviewing your own code - This is for external feedback, not self-critique
- Automated linter suggestions - Just fix them, no need for systematic evaluation
- Non-actionable feedback - If feedback is vague ("make it better"), clarify first before engaging this skill
- Style-only changes in well-defined projects - If the project has automated formatting, defer to those tools
- Trivial typos in comments - Don't over-process simple fixes
## Response Pattern

1. READ - Complete feedback without reacting
2. UNDERSTAND - Restate requirement (or ask)
3. VERIFY - Check against codebase reality
4. EVALUATE - Technically sound for THIS codebase?
5. RESPOND - Technical acknowledgment or reasoned pushback
6. IMPLEMENT - One item at a time, test each
## Should NOT Attempt

- Performative agreement ("You're absolutely right!")
- Implementing feedback without understanding it
- Batch implementing all feedback without testing each
- Accepting feedback that breaks working code
- Arguing style preferences (defer to codebase conventions)
## Forbidden vs Allowed

| Forbidden | Instead |
|---|---|
| "You're absolutely right!" | Restate the technical requirement |
| "Great point!" | Ask clarifying questions |
| "Let me implement that now" | Push back if wrong, or just start working |
## Handling Unclear Feedback

If any item is unclear: STOP. Ask first.

Partner: "Fix 1-6"
You understand items 1, 2, 3, and 6. Items 4 and 5 are unclear.

WRONG: Implement 1, 2, 3, 6 now; ask about 4 and 5 later
RIGHT: "Need clarification on 4 and 5 before proceeding."
## External Reviewer Checks

Before implementing, ask:

- Is it technically correct for THIS codebase?
- Does it break existing functionality?
- Is there a reason for the current implementation?
- Does it work on all platforms?
- Does the reviewer have full context?
## When to Push Back

- Breaks existing functionality
- Reviewer lacks context
- Violates YAGNI (unused feature)
- Technically incorrect
- Conflicts with architectural decisions

How: technical reasoning, specific questions, references to working tests.
## Examples

### Example 1: Valid Feedback

Feedback: "This function should handle null input"
Verify: Check if null is possible in call sites → Yes, from external API
Response: "Fixed. Added null check at line 42."
### Example 2: Needs Clarification

Feedback: "Fix the error handling here"
Issue: Multiple error cases, unclear which one
Response: "Which error case? The timeout (line 30) or the parse failure (line 45)?"
### Example 3: Pushback Required

Feedback: "Use async/await instead of callbacks"
Verify: Current code uses Node streams API which requires callbacks
Response: "The stream API requires callbacks for backpressure handling. Wrapping in async/await would break flow control. See: [link to Node docs]"
### Example 4: Reviewer Lacks Context

Feedback: "Remove this seemingly dead code"
Verify: Code is called via dynamic dispatch from plugin system
Response: "This is called dynamically from plugin loader (see src/plugins/loader.ts:78). Added comment to clarify."
### Example 5: Conflicting Feedback

Situation: Reviewer A says "Extract this into separate function"; Reviewer B says "Keep logic inline for performance"
Wrong approach: Implement A's feedback and ignore B, or vice versa
Right approach: "A and B have conflicting suggestions on line 45. A suggests extraction for readability, B suggests inline for performance. Profiling shows this code path is <1% of runtime. Proposing extraction with inline hint for compiler. Thoughts?"
### Example 6: Pushing Back with Evidence

Feedback: "This regex is too complex, simplify it"
Verify: Check test cases → Regex handles 12 edge cases from production bugs
Response: "This regex handles these production edge cases: [list]. Simplifying would break test cases X, Y, Z (see tests/parser.test.ts:45-67). Alternative: Extract to named function parseComplexEmailFormat() with documentation?"
### Example 7: Requesting Clarification

Feedback: "The error handling here is wrong"
Issue: Multiple error handling blocks, unclear which is "wrong"
Wrong: Rewrite all error handling defensively
Right: "Which error handling? The network timeout (line 30), JSON parse (line 45), or validation error (line 52)? Each handles different failure modes."
### Example 8: Valid but Low Priority

Feedback: "Consider using a factory pattern here"
Verify: Current code works; factory would add abstraction for a single use case
Response: "Agree this could use factory pattern if we add more types. Current scope is just User and Admin. Created ticket #234 to revisit if we add third type. OK to defer?"
### Example 9: Architectural Disagreement

Feedback: "Move this business logic to the frontend for better UX"
Verify: Security policy requires server-side validation
Response: "Security policy requires server-side validation for PII (see docs/security-policy.md section 4.2). Can add optimistic UI update while awaiting server response. Would that address the UX concern?"
### Example 10: Feedback Reveals Misunderstanding

Feedback: "Why are you loading this data twice?"
Verify: Code loads once for display, once for background sync (different data subsets)
Response: "First load (line 30) is display data (metadata only). Second load (line 45) is background sync (full records). They're loading different fields from same endpoint. Should I rename variables to loadMetadata() and syncFullRecords() to clarify?"
## Escalation Triggers

| Condition | Action |
|---|---|
| Reviewer insists on change that breaks tests | Demonstrate with test output, escalate if impasse |
| Conflicting feedback from multiple reviewers | Request sync discussion, don't implement contradictions |
| Feedback requires architecture change | Escalate to tech lead for decision |
| >10 items of feedback | Request prioritization before starting |
## Failure Behavior

If feedback cannot be implemented:
- State specifically what was attempted
- Show evidence of failure (test output, error messages)
- Propose alternative that achieves reviewer's intent
- Never silently skip feedback items
## Implementation Order

1. Clarify unclear items FIRST
2. Blocking issues (breakage, security)
3. Simple fixes (typos, imports)
4. Complex fixes (refactoring, logic)
5. Test each change, verify no regressions
## Acknowledging Correct Feedback

"Fixed. [Brief description]"
"Good catch - [issue]. Fixed in [location]."
[Just fix it and show in the code]
## Response Templates

### Template 1: Acknowledging Valid Feedback

Use when feedback is technically correct and actionable:
"Fixed. [What changed and where]"
"Good catch. [Brief explanation of issue]. Fixed in [file:line]."
"Done. Added [specific change] to handle [edge case]."
Example:
"Good catch. Null pointer possible if API returns empty array. Added null check at parser.ts:45."
### Template 2: Pushing Back with Evidence

Use when feedback conflicts with requirements, tests, or technical constraints:
"[Current implementation] is needed because [technical reason].
Evidence: [test output / documentation / performance data]
Alternative: [if applicable]"
Example:
"The callback pattern is required for the stream API's backpressure mechanism.
Evidence: Tests for large file handling fail with async/await (see tests/stream.test.ts:67)
Alternative: Could add a comment documenting why callbacks are required here?"
### Template 3: Requesting Clarification

Use when feedback is ambiguous or could refer to multiple things:
"Which [specific aspect] - the [option A] at line X or [option B] at line Y?
[Brief context why it matters]"
Example:
"Which validation - the input validation (line 30) or the output sanitization (line 52)?
They serve different purposes (XSS vs data integrity)."
### Template 4: Agreeing with Concerns, Proposing Alternative

Use when feedback identifies a real issue but the suggested solution isn't optimal:
"Agree [the concern is valid].
However, [constraint that prevents suggested solution].
Proposing: [alternative approach]
Tradeoffs: [brief comparison]"
Example:
"Agree this function is too long and hard to test.
However, extracting to separate file would break the plugin API (needs closure over private state).
Proposing: Extract pure logic to helper methods, keep stateful logic in main function.
Tradeoffs: Less extraction than suggested, but maintains API compatibility."
### Template 5: Seeking Prioritization

Use when facing multiple feedback items or architectural changes:
"[Number] items total. Breaking down:
- Blocking: [items that prevent merge]
- Important: [items that improve quality]
- Nice-to-have: [items that are optional]
Proposing: Address blocking + important in this PR, create tickets for nice-to-have.
Timeline: [estimate for each category]"
Example:
"8 items total. Breaking down:
- Blocking: #2 (security), #5 (breaks tests)
- Important: #1, #3, #7 (readability, edge cases)
- Nice-to-have: #4, #6, #8 (refactoring, future-proofing)
Proposing: Address blocking + important in this PR (~2 hours), create tickets for nice-to-have.
OK to proceed?"
### Template 6: Escalating Disagreement

Use when at a technical impasse with the reviewer:
"We have different perspectives on [specific issue]:
- Your concern: [restate their point accurately]
- My concern: [state your technical reasoning]
- Impact: [what breaks either way]
Requesting [tech lead / architect] input on [specific decision needed]."
Example:
"We have different perspectives on error handling strategy:
- Your concern: Errors should propagate to caller for flexibility
- My concern: Current API contract promises no exceptions (documented in v1.0)
- Impact: Changing error handling is breaking change for 40+ consumers
Requesting tech lead input on whether to:
1. Make breaking change with major version bump
2. Add new method with exception handling, deprecate old
3. Keep current behavior"
### Template 7: Acknowledging but Deferring

Use when feedback is valid but out of scope for current PR:
"Valid point. [Acknowledge the concern]
Current PR scope: [what this PR does]
This change would: [expand scope / affect other areas]
Created ticket #[number] to [address concern separately].
OK to defer?"
Example:
"Valid point. The entire auth module could use a refactor to use modern patterns.
Current PR scope: Fix OAuth token expiration bug
This change would: Affect 12 other files, require security re-review
Created ticket #456 to audit auth module in Q2.
OK to defer?"
## Common Mistakes

| Mistake | Fix |
|---|---|
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Avoiding pushback | Technical correctness > comfort |
External feedback = suggestions to evaluate, not orders to follow.
## Anti-Patterns

Things to actively avoid when receiving code review:
### 1. Defensive Responses

Bad: "But it works in production, we've never had issues"
Why: Dismisses valid concerns about edge cases, maintainability, or future issues
Better: Verify if the concern is valid for known use cases, then decide
### 2. Superficial Fixes

Bad: Changing variable names without understanding why reviewer suggested it
Why: Doesn't address underlying issue, may introduce inconsistency
Better: Ask "What problem does this name solve?" then apply that principle consistently
### 3. Selective Implementation

Bad: Implementing easy feedback (typos, formatting) while ignoring hard feedback (architectural concerns)
Why: Gives false impression of completion, leaves critical issues unresolved
Better: Clarify ALL feedback first, prioritize blocking/critical items, communicate timeline for complex changes
### 4. Over-Agreeing Without Verification

Bad: "You're absolutely right! I'll change everything you mentioned immediately!"
Why: Feedback may be based on incomplete context, may conflict with other requirements
Better: Verify each piece of feedback against codebase reality before committing
### 5. Arguing Style Without Checking Standards

Bad: "I prefer snake_case because it's more readable"
Why: Personal preference is irrelevant if project has established conventions
Better: Check project style guide, defer to existing patterns, only discuss if genuinely ambiguous
### 6. Silent Disagreement

Bad: Ignoring feedback you disagree with, hoping reviewer won't notice
Why: Wastes everyone's time, damages trust, feedback will resurface
Better: State disagreement with technical reasoning, propose alternatives, escalate if needed
### 7. Batch Changes Without Verification

Bad: Implementing 10 feedback items in one commit without testing
Why: Can't isolate which change broke tests, hard to revert, compounds errors
Better: One change at a time, verify each, commit incrementally
### 8. Explaining Instead of Fixing

Bad: Long comment explaining why confusing code is actually correct
Why: If it needs explanation, it needs simplification
Better: Refactor for clarity, add comment only if genuinely complex domain logic
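One illustration of "refactor instead of explain", using an invented example:

```typescript
// Before: the comment apologizes for the expression.
// const ok = (n & (n - 1)) === 0; // checks power of two, trust me

// After: the name carries the meaning; no apology needed.
function isPowerOfTwo(n: number): boolean {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}
```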
### 9. Scope Creep from Feedback

Bad: Reviewer suggests null check, you refactor entire module
Why: Increases review burden, mixes concerns, delays merge
Better: Address specific feedback, note broader improvements for separate PR
### 10. Taking It Personally

Bad: "Why do you always criticize my code?"
Why: Code review is about the code, not the person
Better: Separate ego from work, view feedback as investment in code quality
## Related Skills

- code-smell-detection: Apply smell detection to suggested changes
- verification-before-completion: Verify review feedback is addressed
- systematic-debugging: Debug issues raised in review
## Integration

- verification-before-completion skill - Verify each fix before claiming done
- test-driven-development skill - Write tests for fixes when appropriate
- systematic-debugging skill - When feedback reveals bugs to investigate