---
name: automated-code-fixer
description: Automated IT helper for detecting and fixing code issues. Use when code fails tests, linting, type-checking, or has security vulnerabilities. Enforces strict quality gates before accepting fixes.
---
# Automated Code Fixer
An intelligent "IT guy" skill that automatically detects and fixes code issues with strict quality controls.
## When This Skill Activates
- Test failures detected
- Linting errors reported
- Type-checking errors found
- Security vulnerabilities identified
- Build failures encountered
- CI/CD pipeline failures
## Strict Quality Gates

**CRITICAL**: All fixes must pass these gates before being accepted:

### Gate 1: Test Validation
- All existing tests must pass after fix
- New code must have corresponding tests
- Coverage must not drop below 70%
### Gate 2: Linting Compliance
- Black formatting must pass
- Ruff linting with zero errors
- No new security warnings
### Gate 3: Type Safety

- mypy type-checking must pass
- No untyped function signatures
- No `Any` type escapes
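As an illustration of this gate, a sketch of a signature that would pass (the function and types are invented for the example, not taken from the project):

```python
from typing import Optional


# Would fail Gate 3: untyped, so mypy cannot check callers.
#     def find_schedule(schedule_id): ...

# Passes Gate 3: fully annotated, with no `Any` escape.
def find_schedule(schedule_id: int, schedules: dict[int, str]) -> Optional[str]:
    """Return the schedule name for an ID, or None when it is unknown."""
    return schedules.get(schedule_id)
```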
### Gate 4: Security Check
- No hardcoded secrets
- No SQL injection vulnerabilities
- No path traversal issues
- Input validation on all user data
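To make the SQL-injection rule concrete, a minimal sketch using the stdlib `sqlite3` (the table and column names are invented; the project presumably goes through its own async DB layer):

```python
import sqlite3


def find_user(conn: sqlite3.Connection, username: str) -> list:
    # UNSAFE alternative: f"... WHERE name = '{username}'" would let
    # crafted input rewrite the query.
    # A parameterized query treats `username` strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user(conn, "alice"))        # [('alice',)]
print(find_user(conn, "' OR '1'='1"))  # [] -- injection attempt finds nothing
```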
### Gate 5: Architectural Compliance
- Follow layered architecture (Route -> Controller -> Service -> Model)
- Database changes require migrations
- Async/await for all DB operations
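A toy sketch of the layering rule, under stated assumptions (function names are invented; in the real project the route would be an async HTTP handler):

```python
def model_get_schedule(schedule_id: int) -> dict:
    # Model layer: data access only (a real app would query the DB here).
    return {"id": schedule_id, "status": "draft"}


def service_get_schedule(schedule_id: int) -> dict:
    # Service layer: business rules live here, not in the route.
    schedule = model_get_schedule(schedule_id)
    schedule["editable"] = schedule["status"] == "draft"
    return schedule


def route_get_schedule(schedule_id: int) -> dict:
    # Route/controller layer: translates the request into a service call
    # and never reaches past the service into the model directly.
    return service_get_schedule(schedule_id)


print(route_get_schedule(1))
```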
## Fix Process

### Step 1: Diagnose

```bash
# Run diagnostics
cd /home/user/Autonomous-Assignment-Program-Manager/backend
pytest --tb=short 2>&1 | head -50
ruff check app/ tests/
mypy app/ --python-version 3.11
```
### Step 2: Analyze
- Identify root cause (not just symptoms)
- Check related files for context
- Review test expectations
### Step 3: Fix
- Make minimal, focused changes
- Follow existing code patterns
- Add comments only where logic isn't self-evident
### Step 4: Validate

```bash
# Must all pass before accepting fix
pytest --tb=short
ruff check app/ tests/
black --check app/ tests/
mypy app/ --python-version 3.11
```
### Step 5: Report

Provide a concise summary:
- What was broken
- Root cause
- What was fixed
- Tests added/modified
## Escalation Rules

Escalate to a human when:
- Fix requires changing models/migrations
- Fix affects ACGME compliance logic
- Fix touches authentication/security code
- Multiple interdependent failures
- Unclear requirements or business logic
- Fix would take >30 minutes
## Rollback Protocol

If a fix causes additional failures:
- Immediately revert changes
- Document what went wrong
- Escalate to human with full context
## Integration with Existing Commands

This skill works with the project's slash commands:

- `/run-tests` - Execute test suite
- `/lint-fix` - Auto-format and fix linting
- `/health-check` - System health validation
- `/check-compliance` - ACGME compliance verification
## Example Fixes

### Test Failure

- **pytest output**: `FAILED tests/test_swap_executor.py::test_execute_swap - AssertionError`
- **Diagnosis**: `SwapExecutor.execute_swap()` not awaiting async call
- **Fix**: Added `await` to database query on line 47
- **Validation**: All tests pass, coverage maintained at 78%
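A hypothetical reconstruction of that fix; the real `SwapExecutor` is not reproduced here, and the helper below merely stands in for the async database query:

```python
import asyncio


async def run_swap_query() -> str:
    await asyncio.sleep(0)  # stands in for real async DB I/O
    return "swapped"


async def execute_swap() -> str:
    # Before the fix, `result = run_swap_query()` bound a coroutine
    # object instead of the query result, so the test's assertion failed.
    result = await run_swap_query()  # the one-line fix: add `await`
    return result


print(asyncio.run(execute_swap()))  # -> swapped
```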
### Type Error

- **mypy output**: `app/services/schedule.py:23: error: Missing return type annotation`
- **Diagnosis**: Function missing type hints
- **Fix**: Added `-> Optional[Schedule]` return type
- **Validation**: mypy passes, no runtime changes
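A sketch of what that fix might look like (the `Schedule` class here is a stand-in for the project's real model):

```python
from typing import Optional


class Schedule:
    def __init__(self, schedule_id: int) -> None:
        self.schedule_id = schedule_id


# Before: `def get_schedule(schedule_id):` -- mypy reports
# "Missing return type annotation" and cannot check callers.
def get_schedule(schedule_id: int) -> Optional[Schedule]:
    # `Optional` because unknown IDs return None; the annotation is the
    # only change, so runtime behavior is untouched.
    if schedule_id <= 0:
        return None
    return Schedule(schedule_id)
```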
### Security Issue

- **ruff output**: `S105 Possible hardcoded password in variable assignment`
- **Diagnosis**: Test file using hardcoded password
- **Fix**: Replaced with environment variable via conftest fixture
- **Validation**: No security warnings, tests pass
## References

- See `reference.md` for detailed fix patterns
- See `examples.md` for common fix scenarios