---
name: code-review
description: Automated code review using external AI tools (codex and/or gemini-cli). Use this skill after writing or editing code to get a second opinion from other AI models, then implement their recommendations with user approval.
---
# Automated Code Review Skill
This skill performs automated code reviews using external AI tools (OpenAI Codex CLI and/or Google Gemini CLI) to provide a second perspective on code you've written.
## When to Use This Skill
Invoke this skill after you have:
- Written new code files
- Made significant edits to existing code
- Completed a feature implementation
- Fixed a bug and want validation
## Workflow Overview

1. **Identify Code to Review** - Determine which files were recently written or modified
2. **Run External Reviews** - Call codex and/or gemini-cli to analyze the code
3. **Collect Recommendations** - Parse and organize the feedback
4. **Present to User** - Show recommendations with clear explanations
5. **Implement with Approval** - Make changes only after user confirms
## Step-by-Step Instructions

### Step 1: Identify Files to Review
First, identify the files that need review. You can:
- Use the files you just wrote/edited in the current session
- Ask the user which specific files to review
- Use `git diff --name-only` to find recently changed files
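If the session context doesn't make the file set obvious, a helper along these lines can list candidates. This is a sketch assuming a git checkout; `changed_files` is an illustrative name, not part of the skill:

```shell
# List tracked files modified since the last commit. Outside a git
# repository this prints nothing rather than failing, so the caller can
# fall back to asking the user which files to review.
changed_files() {
  git rev-parse --is-inside-work-tree >/dev/null 2>&1 || return 0
  git diff --name-only HEAD
}
```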
### Step 2: Prepare Review Context
For each file to review, gather:
- The full file content
- The purpose/context of the code
- Any specific areas of concern
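Gathering that context can be as simple as concatenating it into one prompt string. A minimal sketch (the function name and argument order are illustrative, not part of the skill):

```shell
# Build a single review prompt from a file's purpose and contents.
# Usage: build_review_context PATH "one-line purpose" ["areas of concern"]
build_review_context() {
  file="$1"
  purpose="$2"
  concerns="${3:-none noted}"
  printf 'Purpose: %s\nAreas of concern: %s\nFile: %s\n\n%s\n' \
    "$purpose" "$concerns" "$file" "$(cat "$file")"
}
```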
### Step 3: Run External Reviews
Use the Bash tool to call the external review tools. Always check if the tools are available first.
#### Option A: Review with Codex CLI

```bash
# Check if codex is available
which codex || echo "codex not found - install with: npm install -g @openai/codex"

# Run codex review
codex "Review this code for bugs, security issues, performance problems, and best practices violations. Provide specific, actionable recommendations:\n\n$(cat FILE_PATH)"
```
#### Option B: Review with Gemini CLI

```bash
# Check if gemini is available
which gemini || echo "gemini not found - install Google's gemini-cli"

# Run gemini review
gemini "Review this code for bugs, security issues, performance problems, and best practices violations. Provide specific, actionable recommendations:\n\n$(cat FILE_PATH)"
```
#### Option C: Run Both (Recommended)

Run both tools in parallel for comprehensive feedback:

```bash
# Run both reviews in parallel
codex "Review this code..." &
gemini "Review this code..." &
wait
```
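In practice it helps to skip tools that aren't installed and to capture each tool's output to its own file for later parsing. A sketch of such a wrapper, assuming each CLI accepts the prompt as its first argument as in the commands above:

```shell
# Run whichever reviewers are installed, in parallel, capturing output per
# tool so Step 4 can parse the results. Missing tools are skipped silently.
run_reviews() {
  prompt="$1"
  file="$2"
  outdir="${3:-/tmp}"
  for tool in codex gemini; do
    command -v "$tool" >/dev/null 2>&1 || continue
    "$tool" "$prompt$(cat "$file")" > "$outdir/review-$tool.txt" 2>&1 &
  done
  wait
}
```

Using `command -v` rather than `which` keeps the check POSIX-portable.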
### Step 4: Parse and Organize Recommendations

After receiving feedback from the external tools:

1. **Categorize** recommendations by type:
   - 🔴 Critical: Security vulnerabilities, bugs that cause crashes
   - 🟠 Important: Performance issues, potential bugs
   - 🟡 Moderate: Code style, maintainability concerns
   - 🟢 Minor: Suggestions, optimizations
2. **Deduplicate** if using multiple tools - combine similar recommendations
3. **Prioritize** by impact and effort
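Deduplication and severity ordering can be sketched with standard text tools, assuming each parsed finding is one line prefixed with its severity (the prefix format and file layout here are illustrative assumptions):

```shell
# Merge findings from several tools' output files, drop exact duplicates,
# and order them critical -> important -> moderate -> minor.
merge_findings() {
  sort -u "$@" | awk '
    /^critical:/  { print 0 "\t" $0; next }
    /^important:/ { print 1 "\t" $0; next }
    /^moderate:/  { print 2 "\t" $0; next }
                  { print 3 "\t" $0 }
  ' | sort -n | cut -f2-
}
```

Only exact duplicate lines are collapsed here; merging semantically similar findings still requires judgment.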
### Step 5: Present Recommendations to User

Format the recommendations clearly:

```markdown
## Code Review Results

### Files Reviewed
- `path/to/file1.js`
- `path/to/file2.py`

### Recommendations

#### 🔴 Critical Issues (Must Fix)

1. **[Security] SQL Injection Vulnerability** (file.js:42)
   - Issue: User input directly concatenated into SQL query
   - Recommendation: Use parameterized queries
   - Suggested by: Codex, Gemini

#### 🟠 Important Issues

1. **[Performance] N+1 Query Problem** (file.py:78)
   - Issue: Database query inside loop
   - Recommendation: Use eager loading or batch queries
   - Suggested by: Gemini

#### 🟡 Moderate Issues
...

### Summary
- Critical: 1
- Important: 2
- Moderate: 3
- Minor: 5
```
### Step 6: Get User Approval

**IMPORTANT**: Before implementing any changes, ask the user for approval:

```markdown
Would you like me to implement these recommendations?

Options:
1. **Implement all** - Fix all issues automatically
2. **Implement critical only** - Only fix critical and important issues
3. **Review individually** - Go through each recommendation one by one
4. **Skip** - Don't implement any changes

Please choose an option (1-4) or specify which recommendations to implement.
```
### Step 7: Implement Approved Changes

For each approved recommendation:

1. Explain what you're about to change
2. Make the edit using the Edit tool
3. Verify the change doesn't break anything
4. Report completion

After all changes:
- Run any relevant tests
- Provide a summary of changes made
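Which test command counts as "relevant" depends on the project. A small detection sketch; the marker files checked here are common conventions, not an exhaustive or authoritative list:

```shell
# Guess the project's test command from common marker files; prints
# nothing if no known marker is found.
detect_test_command() {
  if [ -f package.json ]; then
    echo "npm test"
  elif [ -f pyproject.toml ] || [ -f pytest.ini ] || [ -d tests ]; then
    echo "pytest"
  elif [ -f Cargo.toml ]; then
    echo "cargo test"
  elif [ -f go.mod ]; then
    echo "go test ./..."
  fi
}
```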
## Review Prompts for External Tools

### Comprehensive Review Prompt

```
Review the following code for:

1. **Security Issues**
   - Injection vulnerabilities (SQL, XSS, command injection)
   - Authentication/authorization flaws
   - Sensitive data exposure
   - Insecure dependencies

2. **Bugs and Logic Errors**
   - Off-by-one errors
   - Null/undefined handling
   - Race conditions
   - Edge cases

3. **Performance Problems**
   - Inefficient algorithms
   - Memory leaks
   - Unnecessary computations
   - Database query issues

4. **Code Quality**
   - DRY violations
   - SOLID principles
   - Error handling
   - Code clarity

5. **Best Practices**
   - Language-specific idioms
   - Framework conventions
   - Testing considerations

For each issue found, provide:
- Location (file and line number if possible)
- Description of the problem
- Severity (Critical/Important/Moderate/Minor)
- Specific fix recommendation with code example

Code to review:
```
### Security-Focused Prompt

```
Perform a security audit of this code. Focus on:
- OWASP Top 10 vulnerabilities
- Authentication and session management
- Input validation and sanitization
- Cryptographic issues
- Access control problems

Provide specific remediation steps for each issue found.

Code:
```
### Performance-Focused Prompt

```
Analyze this code for performance issues:
- Time complexity concerns
- Memory usage patterns
- I/O bottlenecks
- Caching opportunities
- Database query optimization

Suggest specific optimizations with expected improvements.

Code:
```
## Configuration Options

Users can customize behavior by setting environment variables:

- `CODE_REVIEW_TOOLS`: Which tools to use (`codex`, `gemini`, or `both`)
- `CODE_REVIEW_SEVERITY`: Minimum severity to report (`critical`, `important`, `moderate`, or `minor`)
- `CODE_REVIEW_AUTO_IMPLEMENT`: Auto-implement certain severities (`none`, `critical`, or `important`)
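A sketch of how these variables might be read, with defaults applied when they are unset; the specific default values chosen here are assumptions:

```shell
# Resolve configuration from the environment, falling back to defaults
# when a variable is unset or empty.
tools="${CODE_REVIEW_TOOLS:-both}"
min_severity="${CODE_REVIEW_SEVERITY:-moderate}"
auto_implement="${CODE_REVIEW_AUTO_IMPLEMENT:-none}"
echo "tools=$tools min_severity=$min_severity auto_implement=$auto_implement"
```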
## Error Handling

### Tool Not Found

If a review tool is not installed:

```
The {tool} CLI is not installed. Would you like to:
1. Continue with only {other_tool}
2. Skip the automated review
3. Get installation instructions
```
### Tool Timeout

If a review takes too long (>60 seconds):

```
The review is taking longer than expected. Would you like to:
1. Wait longer
2. Cancel and proceed without this review
```
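One way to enforce the limit is coreutils `timeout`, which exits with status 124 when the command is cut off. A sketch; the 60-second default mirrors the threshold above, and `REVIEW_TIMEOUT` is an illustrative variable name:

```shell
# Run a review command under a time limit; returns 124 if the limit was
# hit, so the caller knows to present the wait/cancel choice.
run_with_timeout() {
  timeout "${REVIEW_TIMEOUT:-60}" "$@"
  status=$?
  if [ "$status" -eq 124 ]; then
    echo "Review timed out after ${REVIEW_TIMEOUT:-60}s" >&2
  fi
  return "$status"
}
```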
### Empty Response

If a tool returns no recommendations:

```
{tool} found no issues with the reviewed code. This could mean:
- The code follows best practices
- The tool couldn't analyze this language/framework
- The context was insufficient

Consider: Running another review tool or manual inspection.
```
## Example Usage

### Example 1: Review After Writing New File

```
User: Create a user authentication module

Claude: [Creates auth.js with login/logout functions]

Claude: I've created the authentication module. Let me run an automated code review to ensure it's secure and follows best practices.

[Invokes code-review skill]

Claude: Here are the review results from Codex and Gemini:

## Critical Issues
1. Password stored in plain text (auth.js:23)
   - Must hash passwords using bcrypt

## Important Issues
1. No rate limiting on login attempts (auth.js:45)
   - Add rate limiting to prevent brute force

Would you like me to implement these fixes?
```
### Example 2: Review Specific Files

```
User: Review the API endpoints I wrote yesterday

Claude: I'll review the API endpoint files. Let me identify the relevant files first.

[Finds api/routes/*.js files]

Claude: Found 3 API route files. Running automated review...

[Invokes code-review skill with those files]
```
## Notes
- Always respect user preferences for which tools to use
- Some code may not be suitable for external review (sensitive/proprietary)
- External tools have their own rate limits and quotas
- Reviews are suggestions - use judgment when implementing