| name | refactor |
| description | Analyse Python code for refactoring opportunities using functional programming principles, code quality checks, and manual refactoring guidance. Use when user wants to improve code quality, check for code smells, or refactor to functional style. |
Python Code Refactoring Analyser
When the user wants to refactor code, improve code quality, or check for code smells, apply the guidance in the sections below.
1. Introduction
This skill provides comprehensive refactoring guidance for Python code, focusing on:
- Functional programming patterns - Detect non-functional code (mutations, side effects, classes vs functions)
- Code quality checks - Automated analysis using pylint, black, and mypy
- Manual refactoring guidance - Identify code smells and suggest improvements
- Configuration extraction - Find hardcoded values that should be extracted to constants or config files
The skill combines automated tool-based analysis with manual pattern detection to provide actionable refactoring suggestions.
2. When to Use
Activate this skill when the user:
- Requests code review or refactoring
- Wants to improve code quality before committing
- Needs to convert object-oriented code to functional style
- Asks to check for code smells or antipatterns
- Wants periodic code quality checks
- Is preparing code for testing (making it more testable)
3. Quick Start
Execute the Python script located in the same directory as this skill:
Auto-detect mode (analyses all Python files in current directory):
python3 skills/refactor/refactor.py
Specific file mode:
python3 skills/refactor/refactor.py path/to/file.py
Full project scan (analyses all .py files recursively):
python3 skills/refactor/refactor.py --all
The script works immediately with no dependencies (uses Python stdlib). Optional tools (pylint, black, mypy) provide enhanced analysis if installed.
4. What It Checks
Functional Programming Patterns (AST-based)
- Mutations: Detects `list.append()`, `dict[key] = value`, `items[0] = x`
- Class usage: Flags classes when functions + dataclasses would suffice
- Global state: Identifies global variable modifications
- Side effects: Finds `print()`, file I/O, network calls in business logic functions
- Impure functions: Detects functions that modify state or have side effects
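These checks rely on Python's `ast` module. The sketch below is illustrative only: `MutationVisitor` and `MUTATING_METHODS` are assumed names, not the script's actual internals, but they show how in-place mutations such as `items.append(x)` or `dict[key] = value` can be spotted from the syntax tree.

```python
# Illustrative sketch of AST-based mutation detection; MutationVisitor and
# MUTATING_METHODS are assumed names, not this script's actual internals.
import ast

MUTATING_METHODS = {"append", "extend", "insert", "pop", "remove", "clear", "sort"}

class MutationVisitor(ast.NodeVisitor):
    """Collect (line, description) pairs for in-place mutations."""

    def __init__(self) -> None:
        self.issues: list[tuple[int, str]] = []

    def visit_Call(self, node: ast.Call) -> None:
        # Catches calls such as items.append(x) or data.sort()
        if isinstance(node.func, ast.Attribute) and node.func.attr in MUTATING_METHODS:
            self.issues.append((node.lineno, f"in-place call to .{node.func.attr}()"))
        self.generic_visit(node)

    def visit_Assign(self, node: ast.Assign) -> None:
        # Catches subscript assignment such as dict[key] = value or items[0] = x
        if any(isinstance(target, ast.Subscript) for target in node.targets):
            self.issues.append((node.lineno, "subscript assignment"))
        self.generic_visit(node)

visitor = MutationVisitor()
visitor.visit(ast.parse("def f(items):\n    items.append(1)\n    items[0] = 2\n"))
print(visitor.issues)  # [(2, 'in-place call to .append()'), (3, 'subscript assignment')]
```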
Code Quality (via pylint)
- PEP 8 style compliance
- Naming conventions
- Code complexity metrics
- Unused imports and variables
- Missing docstrings
- Potential bugs
Code Formatting (via black)
- Line length consistency
- Indentation style
- Quote style
- Whitespace formatting
Type Safety (via mypy)
- Type hint coverage
- Type correctness
- Return type consistency
- Argument type validation
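As a hedged illustration (not output from this skill), the snippet below shows the kind of inconsistency mypy reports when a return value disagrees with the declared return type:

```python
# Hedged illustration of an issue mypy surfaces; not literal output from this skill.
def count_rows(rows: list[list[str]]) -> int:
    if not rows:
        return "empty"  # mypy flags this: return type declared as int, value is str
    return len(rows)
```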
Manual Code Smell Detection
- Functions longer than 50 lines
- Nesting depth greater than 3 levels
- Magic numbers and strings (hardcoded values)
- Missing or inadequate docstrings
- Repeated code patterns (DRY violations)
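A minimal sketch of how one of these smells, excessive function length, can be measured with the `ast` module; the `long_functions` helper and the 50-line threshold are illustrative assumptions, not the script's exact implementation.

```python
# Illustrative sketch of measuring function length with the ast module.
# long_functions and the 50-line threshold are assumptions, not the script's internals.
import ast

MAX_FUNCTION_LINES = 50

def long_functions(source: str) -> list[tuple[str, int]]:
    """Return (name, line count) for every function longer than the threshold."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1  # end_lineno requires Python 3.8+
            if length > MAX_FUNCTION_LINES:
                offenders.append((node.name, length))
    return offenders
```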
Configuration Extraction
- Hardcoded file paths, URLs, API keys
- Environment-specific values
- Magic constants that should be named
- Format strings and patterns
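A rough sketch of how such hardcoded values might be spotted with simple regular expressions; the `CONFIG_PATTERNS` mapping and its patterns are deliberately simplified assumptions for illustration.

```python
# Rough, illustrative patterns for spotting hardcoded values; CONFIG_PATTERNS and
# its regexes are simplified assumptions, not the script's real rules.
import re

CONFIG_PATTERNS = {
    "URL": re.compile(r"https?://[^\s'\"]+"),
    "absolute path": re.compile(r"['\"]/(?:etc|home|usr|var)/[^'\"]*['\"]"),
    "datetime format": re.compile(r"['\"]%[YmdHMS][^'\"]*['\"]"),
}

def find_hardcoded_values(line: str) -> list[str]:
    """Return the categories of hardcoded values found on one source line."""
    return [name for name, pattern in CONFIG_PATTERNS.items() if pattern.search(line)]

print(find_hardcoded_values('stamp = now.strftime("%Y%m%d_%H%M%S")'))  # ['datetime format']
```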
5. Output Interpretation
The script produces a structured report with issues grouped by severity and category:
Severity Levels
- CRITICAL: Must fix - violates core principles or will cause bugs
- WARNING: Should fix - reduces code quality or maintainability
- INFO: Consider fixing - minor improvements or suggestions
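One plausible shape for the records behind this report, shown as a sketch; the `Issue` and `Severity` names and fields are assumptions for illustration, not the script's guaranteed data model.

```python
# One plausible data model for report entries; names and fields are illustrative.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"  # must fix
    WARNING = "warning"    # should fix
    INFO = "info"          # consider fixing

@dataclass(frozen=True)
class Issue:
    severity: Severity
    category: str          # e.g. "functional patterns", "code quality", "config"
    line: int
    message: str
    suggestion: str = ""
```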
Example Output
Refactoring Analysis: skills/anonymise/anonymise.py
================================================================
SUMMARY: 5 issues found
Critical: 0
Warning: 3
Info: 2
FUNCTIONAL PATTERNS (2 warnings)
Line 52: Side effect detected in function 'anonymise_csv'
Suggestion: Extract file I/O to separate function, keep core logic pure
Line 70: Mutation detected - modifying list in place
Suggestion: Use list comprehension or functional transformation
CODE QUALITY (1 warning)
[pylint] Line 54: Consider using 'with' statement for resource cleanup
Score: 9.2/10
INFO (2)
[config] Line 50: Hardcoded datetime format string
Suggestion: Extract to constant TIMESTAMP_FORMAT = "%Y%m%d_%H%M%S"
[code smell] Function 'main' is 45 lines - approaching complexity threshold
Suggestion: Consider extracting argument parsing to separate function
Checks run: AST analysis, pylint, black, mypy
Tools available: pylint ✓, black ✓, mypy ✓
6. Tool Integration
First-Run Setup
The script automatically checks for optional analysis tools. If tools are not installed:
Optional tools not found:
- pylint: pip install pylint
- black: pip install black
- mypy: pip install mypy
Install all: pip install pylint black mypy
Continuing with stdlib-only analysis...
Note: Core functional pattern analysis and code smell detection work without any external dependencies.
Configuration Files
On first run, the script can generate configuration files if they don't exist:
pyproject.toml - Configures black, mypy, and pylint:
- Line length: 88 characters (black default)
- Python version: 3.8+
- Type checking: Strict mode
- Functional-programming-friendly linter rules
You can customise these configs for project-specific requirements.
Tool Execution
- Graceful degradation: Skips unavailable tools without errors
- Unified output: Parses all tool outputs into consistent Issue format
- Performance: Caches results during analysis for speed
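A sketch of what graceful degradation can look like in practice, assuming a hypothetical `run_pylint` helper: the tool is only invoked if it is found on PATH, and its absence is reported without raising. The same pattern applies to black and mypy.

```python
# Sketch of graceful degradation; run_pylint is a hypothetical helper, not the
# script's actual function name.
import shutil
import subprocess
from typing import Optional

def run_pylint(path: str) -> Optional[str]:
    """Run pylint on a file, or return None if pylint is not installed."""
    if shutil.which("pylint") is None:
        print("pylint not found - skipping (pip install pylint)")
        return None
    result = subprocess.run(
        ["pylint", path],
        capture_output=True,
        text=True,
        check=False,  # pylint exits non-zero whenever it reports issues
    )
    return result.stdout
```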
7. Manual Refactoring Guidance
Extracting Configuration
Before (hardcoded values):
```python
from datetime import datetime

timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_file = f"{timestamp}.csv"
```
After (extracted constants):
```python
from datetime import datetime

TIMESTAMP_FORMAT = "%Y%m%d_%H%M%S"
CSV_EXTENSION = ".csv"

timestamp = datetime.now().strftime(TIMESTAMP_FORMAT)
output_file = f"{timestamp}{CSV_EXTENSION}"
```
Converting to Functional Style
Before (mutation-based):
```python
def process_items(items):
    results = []
    for item in items:
        if item > 0:
            results.append(item * 2)
    return results
```
After (functional):
```python
def process_items(items: list[int]) -> list[int]:
    return [item * 2 for item in items if item > 0]
```
Extracting Pure Functions
Before (side effects mixed with logic):
```python
import csv

def anonymise_csv(input_file: str, output_file: str) -> str:
    with open(input_file, 'r') as f:
        rows = list(csv.reader(f))
    anonymised = [row[1:] for row in rows]
    with open(output_file, 'w') as f:
        csv.writer(f).writerows(anonymised)
    return output_file
```
After (pure core, impure shell):
```python
import csv

def read_csv(file_path: str) -> list[list[str]]:
    """Read CSV file (impure I/O)."""
    with open(file_path, 'r') as f:
        return list(csv.reader(f))

def anonymise_rows(rows: list[list[str]]) -> list[list[str]]:
    """Remove first column from all rows (pure function)."""
    return [row[1:] for row in rows]

def write_csv(file_path: str, rows: list[list[str]]) -> None:
    """Write CSV file (impure I/O)."""
    with open(file_path, 'w') as f:
        csv.writer(f).writerows(rows)

def anonymise_csv(input_file: str, output_file: str) -> str:
    """Orchestrate anonymisation (impure shell)."""
    rows = read_csv(input_file)
    anonymised = anonymise_rows(rows)
    write_csv(output_file, anonymised)
    return output_file
```
Function Decomposition
When to break down functions:
- Function exceeds 50 lines
- Cyclomatic complexity > 10
- Multiple levels of nesting (> 3)
- Mixing different levels of abstraction
- Repeated code blocks
How to decompose:
- Identify distinct responsibilities
- Extract each responsibility to a named function
- Use descriptive function names
- Pass dependencies explicitly (no hidden state)
- Prefer pure functions where possible
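As a small illustration of these steps (hypothetical function names, not taken from this project), argument parsing can be extracted from `main` so the orchestration layer stays thin and dependencies are passed explicitly:

```python
# Hypothetical decomposition sketch; the function names are illustrative only.
import argparse

def parse_args() -> argparse.Namespace:
    """Single responsibility: translate the command line into options."""
    parser = argparse.ArgumentParser(description="Analyse Python files")
    parser.add_argument("path", nargs="?", default=".", help="file or directory to analyse")
    parser.add_argument("--all", action="store_true", help="scan the project recursively")
    return parser.parse_args()

def analyse(path: str, recursive: bool = False) -> None:
    """Placeholder for the analysis pipeline; the pure logic would live here."""
    print(f"Analysing {path} (recursive={recursive})")

def main() -> None:
    """Thin orchestration layer: parse, then delegate to named helpers."""
    args = parse_args()
    analyse(args.path, recursive=args.all)
```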
8. Notes
- British English: All output uses British spelling (analyse, optimise, behaviour, etc.)
- Non-destructive: Analysis only - no automatic code modifications
- Platform-agnostic: Works with any AI coding assistant (Claude Code, Cursor, Aider, Codex CLI, etc.)
- Minimal dependencies: Core analysis uses stdlib only; external tools are optional enhancements
- Testability focus: Suggestions prioritise making code easier to test with pytest
- Functional-first: Recommendations align with project preference for functional programming idioms
- Standards compliance: Checks against AGENTS.md conventions (PEP 8, type hints, British English, functional style)