| Field | Value |
|---|---|
| name | unit-test-generator |
| description | Automatically generates comprehensive unit tests for Python, TypeScript, and JavaScript codebases. Analyzes function signatures, docstrings, and implementation patterns to produce tests covering happy paths, edge cases, error conditions, and boundary values. Supports pytest, Jest, and Vitest. |
| license | MIT |
| allowed-tools | Bash, Read, Write, Edit, Glob, Grep |
# Unit Test Generator

## Purpose
Automatically generates comprehensive unit tests for Python, TypeScript, and JavaScript codebases. Analyzes function signatures, docstrings, and implementation patterns to produce tests covering happy paths, edge cases, error conditions, and boundary values.
## Triggers

Use this skill when the user asks to:
- "generate unit tests"
- "create test coverage"
- "write tests for this code"
- "add unit tests"
- "test this function"
- "improve test coverage"
## When to Use
- New code without tests
- Low coverage reports
- Refactoring with regression risk
- Code review requirements
- Bootstrapping test infrastructure
## When NOT to Use
- Integration testing needed (use api-contract-validator)
- Performance testing needed (use performance-benchmark)
- Testing external APIs (use integration tests)
- Security testing (use security-test-suite)
## Core Instructions

### Test Case Generation Strategy
| Test Type | Description | Priority |
|---|---|---|
| Happy Path | Expected inputs -> expected outputs | P0 |
| Null/Empty | None, null, "", [], {} | P0 |
| Boundary | Min, max, off-by-one | P1 |
| Type Error | Wrong types, coercion | P1 |
| Exception | Error conditions | P1 |
| Async | Promise rejection, timeout | P2 |
### Analysis Procedure

#### Function Discovery Phase
- Parse source files using AST
- Identify function/method signatures
- Extract type hints and defaults
- Parse docstrings for expected behavior
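The discovery steps above can be sketched with Python's built-in `ast` module. This is a minimal illustration, not the skill's actual implementation: `discover_functions` and the sample source are hypothetical, and a real pass would also collect type hints and default values.

```python
import ast


def discover_functions(source: str):
    """Return (name, arg_names, docstring) for each function in the source."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            arg_names = [a.arg for a in node.args.args]
            found.append((node.name, arg_names, ast.get_docstring(node)))
    return found


src = '''
def add(a: int, b: int = 0) -> int:
    """Add two numbers."""
    return a + b
'''
print(discover_functions(src))  # [('add', ['a', 'b'], 'Add two numbers.')]
```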
#### Test Case Design Phase
- Generate happy path test cases
- Create boundary value tests for numeric types
- Add null/empty input tests
- Design exception tests from documented raises
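For a single numeric parameter, the design phase can be approximated as below. `design_cases` and the probe lists are illustrative assumptions drawn from the strategy table, not the skill's actual API.

```python
# Common null/empty probes from the test-case generation strategy table.
NULL_EMPTY_CASES = [None, "", [], {}, 0]


def design_cases(param_min: int, param_max: int) -> dict:
    """Combine boundary and null/empty inputs for one numeric parameter."""
    return {
        # Min, max, and their off-by-one neighbours.
        "boundary": [param_min, param_min + 1, param_max - 1, param_max],
        "null_empty": NULL_EMPTY_CASES,
    }


print(design_cases(0, 100)["boundary"])  # [0, 1, 99, 100]
```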
#### Code Generation Phase
- Select appropriate test framework
- Apply naming conventions
- Generate assertion statements
- Add documentation comments
## Framework Support
| Language | Frameworks |
|---|---|
| Python | pytest (recommended), unittest |
| JavaScript | Jest (recommended), Vitest, Mocha |
| TypeScript | Jest, Vitest |
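Framework selection can be approximated by checking common project marker files. This is a heuristic sketch: `detect_framework` is a hypothetical helper, and real projects may instead configure pytest in `pyproject.toml` or `setup.cfg`.

```python
import json
from pathlib import Path


def detect_framework(project_dir: str) -> str:
    """Guess the test framework from marker files in the project root."""
    root = Path(project_dir)
    if (root / "pytest.ini").exists() or (root / "conftest.py").exists():
        return "pytest"
    pkg = root / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("devDependencies", {})
        if "vitest" in deps:
            return "vitest"
        if "jest" in deps:
            return "jest"
    return "pytest"  # fall back to the recommended Python default
```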
## Templates

### Generated Test File (Python/pytest)
"""
Unit tests for {module_name}
Auto-generated by unit-test-generator skill.
Review and adjust expected values before running.
"""
import pytest
from {module} import {function_name}
class Test{FunctionName}:
"""Test suite for {function_name} function."""
def test_{function_name}_happy_path(self):
"""Test {function_name} with typical valid inputs."""
result = {function_name}({typical_args})
assert result == {expected_output}
@pytest.mark.parametrize("input_val,expected", [
(0, {zero_expected}),
(-1, {negative_expected}),
({max_val}, {max_expected}),
])
def test_{function_name}_boundaries(self, input_val, expected):
"""Test {function_name} at boundary values."""
result = {function_name}(input_val)
assert result == expected
def test_{function_name}_invalid_input_raises(self):
"""Test that {function_name} raises {exception} for invalid input."""
with pytest.raises({exception}):
{function_name}({invalid_args})
### Generated Test File (TypeScript/Jest)
```typescript
import { {functionName} } from './{module}';

describe('{functionName}', () => {
  it('should return expected output for valid inputs', () => {
    const result = {functionName}({typicalArgs});
    expect(result).toBe({expectedOutput});
  });

  it.each([
    [0, {zeroExpected}],
    [-1, {negativeExpected}],
    [{maxVal}, {maxExpected}],
  ])('should handle boundary value %s', (input, expected) => {
    const result = {functionName}(input);
    expect(result).toBe(expected);
  });

  it('should throw for invalid input', () => {
    expect(() => {functionName}({invalidArgs})).toThrow();
  });
});
```
## Example

**Input:** Generate tests for this function:
```python
def calculate_margin(revenue: float, cost: float, fee_rate: float = 0.02) -> float:
    """
    Calculate profit margin after fees.

    Raises:
        ValueError: If revenue or cost is negative
    """
    if revenue < 0 or cost < 0:
        raise ValueError("Revenue and cost must be non-negative")
    return revenue - cost - (revenue * fee_rate)
```
**Output:**
```python
import pytest

from module import calculate_margin


class TestCalculateMargin:
    def test_calculate_margin_happy_path(self):
        result = calculate_margin(revenue=100.0, cost=60.0, fee_rate=0.02)
        assert result == 38.0

    def test_calculate_margin_zero_cost(self):
        result = calculate_margin(revenue=100.0, cost=0.0)
        assert result == 98.0

    @pytest.mark.parametrize("revenue,cost,expected", [
        (0.0, 0.0, 0.0),
        (0.001, 0.0, 0.00098),
    ])
    def test_calculate_margin_boundaries(self, revenue, cost, expected):
        result = calculate_margin(revenue=revenue, cost=cost)
        assert abs(result - expected) < 1e-10

    def test_calculate_margin_negative_revenue_raises(self):
        with pytest.raises(ValueError, match="must be non-negative"):
            calculate_margin(revenue=-10.0, cost=5.0)
```
## Validation Checklist
Before completing test generation, verify:
- All public functions have at least one test
- Happy path tests cover expected behavior
- Boundary conditions tested for numeric parameters
- Exception tests match documented raises
- Import statements are correct
- Test names follow naming convention
- Parametrized tests used where appropriate
- Generated tests pass initial run (syntax check)
## Related Skills

- `api-contract-validator` - For API integration tests
- `test-health-monitor` - For test suite analysis
- `data-validation` - For data quality tests