---
name: test-generator
description: Generate comprehensive test boilerplate following project TDD conventions. Use when creating new tests for modules, classes, or functions. Automatically creates tests with proper structure, type hints, docstrings, and AAA pattern.
---
# Test Boilerplate Generator
## ⚠️ MANDATORY: Read Project Documentation First
BEFORE generating tests, you MUST read and understand the following project documentation:
### Core Project Documentation
- README.md - Project overview, features, and getting started
- AI_DOCS/project-context.md - Tech stack, architecture, development workflow
- AI_DOCS/code-conventions.md - Code style, formatting, best practices (especially for tests)
- AI_DOCS/tdd-workflow.md - TDD process, testing standards, coverage requirements (CRITICAL)
### Session Context (if available)
- .ai-context/ACTIVE_TASKS.md - Current tasks and priorities
- .ai-context/CONVENTIONS.md - Project-specific conventions
- .ai-context/RECENT_DECISIONS.md - Recent architectural decisions
- .ai-context/LAST_SESSION_SUMMARY.md - Previous session summary
### Additional AI Documentation
- AI_DOCS/ai-tools.md - Session management workflow
- AI_DOCS/ai-skills.md - Other specialized skills/agents available
### Why This Matters
- Testing Standards: Follow project-specific test structure and conventions
- TDD Patterns: Use AAA pattern, proper fixtures, and parametrization
- Code Quality: Generate tests with type hints, docstrings, proper imports
- Mock Guidelines: Understand when to mock vs. use real code
- Coverage Goals: Generate tests that help reach 80%+ coverage
After reading these files, proceed with your test generation task below.
## Overview
Automatically generate comprehensive test boilerplate that follows this project's strict TDD and quality standards.
## When to Use
- Creating tests for a new module, class, or function
- Starting TDD workflow (write tests FIRST!)
- Need test templates with proper structure
- Want parametrized test examples
- Creating test fixtures
## What This Skill Generates
- ✅ Test files with proper imports and structure
- ✅ Test classes organized by functionality
- ✅ Parametrized test templates
- ✅ AAA pattern (Arrange-Act-Assert) structure
- ✅ Complete type hints
- ✅ Google-style docstrings
- ✅ Fixture templates (when needed)
## Usage Examples
### Generate Tests for a Module
```text
# User provides source file
generate tests for src/python_modern_template/validators.py
```
Output: Creates tests/test_validators.py with:
- Test class for each function/class
- Parametrized test cases
- Edge case test templates
- Proper imports and structure
### Generate Tests for a Specific Function
```text
# User describes the function
generate tests for validate_email function that checks email format
```
Output: Creates test class with:
- Basic functionality tests
- Edge cases (empty, invalid format, special characters)
- Parametrized test cases
- Error handling tests
### Generate Fixture Template
```text
# User needs test fixtures
generate pytest fixture for database connection
```
Output: Creates conftest.py entry with proper fixture structure
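For example, a database-connection fixture could be sketched like this, using only the standard library's `sqlite3` and pytest's built-in `tmp_path` fixture (illustrative only; adapt the resource type to your project):

```python
"""Illustrative conftest.py entry for a throwaway database connection."""
from __future__ import annotations

import sqlite3
from collections.abc import Generator
from pathlib import Path

import pytest


@pytest.fixture
def db_connection(tmp_path: Path) -> Generator[sqlite3.Connection, None, None]:
    """Provide a SQLite connection backed by a per-test temporary file."""
    # Setup: each test gets its own database file under tmp_path
    connection = sqlite3.connect(tmp_path / "test.db")
    try:
        yield connection
    finally:
        # Teardown: close the connection even if the test failed
        connection.close()
```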
## Step-by-Step Process
### Step 1: Analyze Source Code
If a source file is provided:
- Read the source file
- Identify all public functions and classes
- Analyze function signatures (parameters, return types)
- Identify error cases (raises clauses)
If described by the user:
- Extract function/class name
- Understand parameters and return type
- Identify test scenarios from description
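As a sketch of what this analysis could look like if done with the standard-library `ast` module (the bundled `generate.py` may work differently; the function name here is illustrative):

```python
from __future__ import annotations

import ast
from pathlib import Path


def public_functions(source_file: str) -> list[tuple[str, list[str]]]:
    """Return (name, parameter names) for each public top-level function."""
    tree = ast.parse(Path(source_file).read_text())
    found: list[tuple[str, list[str]]] = []
    for node in tree.body:
        # Skip private helpers; collect both sync and async defs
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and not node.name.startswith("_"):
            found.append((node.name, [arg.arg for arg in node.args.args]))
    return found
```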
### Step 2: Choose Template
Select the appropriate template from `templates/`:
- `test_function.py` - Simple function tests
- `test_class.py` - Class with methods tests
- `test_parametrized.py` - Data-driven tests
- `test_async.py` - Async function tests
- `test_exception.py` - Error handling tests
- `conftest_fixture.py` - Pytest fixtures
### Step 3: Generate Test Code
Use templates from the `.claude/skills/test-generator/templates/` directory.
Template Variables:
- `{module_name}` - Source module name
- `{class_name}` - Class being tested
- `{function_name}` - Function being tested
- `{test_class_name}` - Generated test class name
- `{import_path}` - Full import path
- `{param_names}` - Function parameters
- `{return_type}` - Function return type
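One way these placeholders could be substituted (a sketch, not necessarily how the bundled script does it; paths and values are illustrative):

```python
from pathlib import Path

# Values assumed to come from the Step 1 analysis
values = {
    "module_name": "validators",
    "function_name": "validate_email",
    "import_path": "python_modern_template.validators",
    "FunctionNameCamelCase": "ValidateEmail",
}

template_text = Path(".claude/skills/test-generator/templates/test_function.py").read_text()
rendered = template_text.format(**values)  # fills {module_name}, {function_name}, ...
Path("tests/test_validators.py").write_text(rendered)
```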
### Step 4: Add Test Cases
Generate test cases for:
- Happy path - Normal successful execution
- Edge cases - Boundary values, empty inputs, None values
- Error cases - Invalid inputs, exceptions
- Parametrized cases - Multiple input combinations
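For a hypothetical `slugify(text: str) -> str` function, those categories might be filled in like this (names and expected values are invented for illustration):

```python
import pytest

from python_modern_template.text import slugify  # hypothetical module and function


class TestSlugify:
    """Illustrative tests covering happy path, edge, error, and parametrized cases."""

    def test_slugify_happy_path(self) -> None:
        """Test slugify converts a simple title to a slug."""
        assert slugify("Hello World") == "hello-world"

    @pytest.mark.parametrize(
        "text,expected",
        [
            ("  spaced  ", "spaced"),  # edge case: surrounding whitespace
            ("", ""),                  # edge case: empty input (assumed behaviour)
        ],
    )
    def test_slugify_edge_cases(self, text: str, expected: str) -> None:
        """Test slugify normalises edge-case inputs."""
        assert slugify(text) == expected

    def test_slugify_rejects_none(self) -> None:
        """Test slugify raises on invalid input."""
        with pytest.raises(TypeError):
            slugify(None)  # type: ignore[arg-type]
```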
### Step 5: Format and Save
- Apply proper formatting (Black, isort)
- Ensure type hints and docstrings
- Save to appropriate test file
- Display summary of generated tests
## Templates Reference
### Template: Simple Function Test
File: templates/test_function.py
"""Tests for {module_name}.{function_name}."""
from __future__ import annotations
import pytest
from {import_path} import {function_name}
class Test{FunctionNameCamelCase}:
"""Test cases for {function_name}."""
def test_{function_name}_basic(self) -> None:
"""Test basic functionality of {function_name}."""
# Arrange
# TODO: Set up test data
# Act
result = {function_name}()
# Assert
# TODO: Add assertions
assert result is not None
def test_{function_name}_with_valid_input(self) -> None:
"""Test {function_name} with valid input."""
# Arrange
# TODO: Prepare valid input
# Act
# TODO: Call function
# Assert
# TODO: Verify result
pass
def test_{function_name}_with_invalid_input(self) -> None:
"""Test {function_name} handles invalid input."""
# Arrange
# TODO: Prepare invalid input
# Act & Assert
with pytest.raises(ValueError):
{function_name}() # TODO: Add invalid input
def test_{function_name}_edge_cases(self) -> None:
"""Test {function_name} edge cases."""
# TODO: Test empty input, None, boundary values
pass
### Template: Parametrized Test
File: templates/test_parametrized.py
"""Tests for {module_name}.{function_name}."""
from __future__ import annotations
import pytest
from {import_path} import {function_name}
class Test{FunctionNameCamelCase}:
"""Parametrized test cases for {function_name}."""
@pytest.mark.parametrize(
"input_value,expected",
[
# Happy path cases
("valid_input_1", "expected_output_1"),
("valid_input_2", "expected_output_2"),
# Edge cases
("", "expected_for_empty"),
(None, "expected_for_none"),
# TODO: Add more test cases
],
)
def test_{function_name}_parametrized(
self,
input_value: str, # TODO: Adjust type
expected: str, # TODO: Adjust type
) -> None:
"""Test {function_name} with various inputs."""
# Act
result = {function_name}(input_value)
# Assert
assert result == expected
@pytest.mark.parametrize(
"invalid_input,expected_error",
[
("invalid_1", ValueError),
("invalid_2", TypeError),
# TODO: Add more error cases
],
)
def test_{function_name}_error_cases(
self,
invalid_input: str, # TODO: Adjust type
expected_error: type[Exception],
) -> None:
"""Test {function_name} raises appropriate errors."""
# Act & Assert
with pytest.raises(expected_error):
{function_name}(invalid_input)
### Template: Class with Methods Test
File: templates/test_class.py
"""Tests for {module_name}.{class_name}."""
from __future__ import annotations
import pytest
from {import_path} import {class_name}
class Test{ClassName}:
"""Test cases for {class_name}."""
@pytest.fixture
def instance(self) -> {class_name}:
"""Create {class_name} instance for testing."""
return {class_name}() # TODO: Add initialization parameters
def test_initialization(self, instance: {class_name}) -> None:
"""Test {class_name} initialization."""
# Assert
assert isinstance(instance, {class_name})
# TODO: Verify initial state
def test_method_name(self, instance: {class_name}) -> None:
"""Test method_name behavior."""
# Arrange
# TODO: Set up test data
# Act
result = instance.method_name() # TODO: Add actual method
# Assert
# TODO: Verify result
assert result is not None
### Template: Async Function Test
File: templates/test_async.py
"""Tests for async {module_name}.{function_name}."""
from __future__ import annotations
import pytest
from {import_path} import {function_name}
class Test{FunctionNameCamelCase}:
"""Test cases for async {function_name}."""
@pytest.mark.asyncio
async def test_{function_name}_basic(self) -> None:
"""Test basic async functionality."""
# Arrange
# TODO: Set up test data
# Act
result = await {function_name}()
# Assert
assert result is not None
@pytest.mark.asyncio
async def test_{function_name}_concurrent(self) -> None:
"""Test concurrent execution."""
# Arrange
import asyncio
# Act
results = await asyncio.gather(
{function_name}(),
{function_name}(),
)
# Assert
assert len(results) == 2
### Template: Pytest Fixture
File: templates/conftest_fixture.py
"""Pytest fixtures for {test_module}."""
from __future__ import annotations
from collections.abc import Generator
import pytest
@pytest.fixture
def {fixture_name}() -> Generator[{ResourceType}, None, None]:
"""Provide {description} for tests.
Yields:
{ResourceType}: {Description of what is yielded}
"""
# Setup
resource = {ResourceType}() # TODO: Initialize resource
try:
yield resource
finally:
# Teardown
pass # TODO: Add cleanup code
## Quality Standards
All generated tests MUST include:
### ✅ Type Hints
- All test functions annotated with `-> None`
- All fixture functions with proper return types
- All parameters with type hints
### ✅ Docstrings
- Every test function has descriptive docstring
- Docstring explains WHAT is being tested, not HOW
- Clear and concise
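A quick contrast of the intended style (both tests are hypothetical placeholders):

```python
def test_parse_date_rejects_empty_string() -> None:
    """Test parse_date raises ValueError for an empty string."""  # describes WHAT is tested
    ...


def test_parse_date_2() -> None:
    """Call parse_date('') inside pytest.raises and check the error."""  # describes HOW - avoid
    ...
```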
### ✅ AAA Pattern
```python
def test_example() -> None:
    """Test description."""
    # Arrange - Set up test data
    data = prepare_test_data()

    # Act - Execute the function
    result = function_under_test(data)

    # Assert - Verify the result
    assert result == expected
```
### ✅ Proper Imports
```python
from __future__ import annotations  # Always first

import pytest

from python_modern_template.module import function
```
### ✅ No Excessive Mocking
- Use real code where possible
- Only mock: HTTP, DB, File I/O, time, random
- Never mock: internal functions, validators, processors
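For example, if a function fetches data over HTTP and then validates it, only the HTTP boundary would be patched; the internal validator runs for real (module and helper names below are invented for illustration):

```python
from unittest.mock import patch

from python_modern_template import client  # hypothetical module


def test_get_user_validates_real_payload() -> None:
    """Test get_user runs the real validation on a canned payload."""
    # Arrange - mock only the external HTTP boundary
    fake_payload = {"email": "user@example.com"}

    # Act
    with patch.object(client, "_http_get", return_value=fake_payload):
        user = client.get_user("user-123")

    # Assert - real parsing/validation code produced the result
    assert user.email == "user@example.com"
```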
## Output Format
After generating tests, provide:
````markdown
## Generated Test File: tests/test_{module}.py

**Summary:**
- Test Classes: X
- Test Functions: X
- Parametrized Tests: X
- Fixtures: X
- Lines of Code: ~XXX

**Test Coverage:**
- Function 1: 4 test cases (happy path, edge cases, error handling)
- Function 2: 3 test cases
- Total: X test cases

**Next Steps:**
1. Review generated tests and customize TODO sections
2. Add specific test data for your use case
3. Run tests to verify they fail (TDD red phase):
   ```bash
   make test
   ```
4. Implement the functionality
5. Run tests again to verify they pass
6. Check coverage:
   ```bash
   make coverage
   ```

**Files Created/Modified:**
- ✅ tests/test_{module}.py (NEW)
- ⚠️ Remember to implement tests before the actual code (TDD!)
````
## Integration with Project Tools
This skill integrates with:
- `make test` - Run generated tests
- `make coverage` - Verify test coverage
- `make format` - Format generated code
- `tdd-reviewer` agent - Verify TDD compliance
## Script Usage
For programmatic generation, use:
```bash
# Generate from source file
python .claude/skills/test-generator/scripts/generate.py \
  --source src/python_modern_template/module.py \
  --output tests/test_module.py

# Generate from function name
python .claude/skills/test-generator/scripts/generate.py \
  --function validate_email \
  --module validators \
  --template parametrized
```
## Best Practices
- Start with generated boilerplate, then customize
- Review all TODO comments - replace with actual test data
- Run tests immediately - they should fail initially (TDD)
- Use parametrized tests for multiple input cases
- Avoid over-mocking - use real code where possible
- Aim for edge cases - empty, None, invalid, boundary values
- Test one thing per test - focused, clear assertions
## Examples in Action
### Example 1: Generate tests for email validator
User Request:
generate tests for the validate_email function in src/python_modern_template/validators.py
Skill Actions:
- Read src/python_modern_template/validators.py
- Identify validate_email function signature
- Choose parametrized template (multiple test cases)
- Generate tests/test_validators.py with:
  - Valid email cases
  - Invalid email cases (no @, multiple @, etc.)
  - Edge cases (empty, whitespace, very long)
  - Error handling tests
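An excerpt of what the generated parametrized tests might look like (the real output depends on the actual `validate_email` signature; a boolean return is assumed here):

```python
import pytest

from python_modern_template.validators import validate_email


class TestValidateEmail:
    """Parametrized test cases for validate_email."""

    @pytest.mark.parametrize(
        "email,expected",
        [
            ("user@example.com", True),   # happy path
            ("no-at-sign", False),        # invalid: missing @
            ("two@@example.com", False),  # invalid: multiple @
            ("", False),                  # edge case: empty string
        ],
    )
    def test_validate_email_parametrized(self, email: str, expected: bool) -> None:
        """Test validate_email classifies common inputs."""
        assert validate_email(email) is expected
```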
### Example 2: Generate tests for a new class
User Request:
I'm creating a UserManager class with methods: create_user, delete_user, find_user. Generate tests.
Skill Actions:
- Choose class template
- Generate test class with:
  - Fixture for UserManager instance
  - Test for each method
  - Test initialization
  - Test error cases
- Add parametrized tests for find_user (various search criteria)
### Example 3: Generate async tests
User Request:
Generate tests for async function fetch_data(url: str) -> dict
Skill Actions:
- Choose async template
- Generate async tests with:
  - Basic fetch test with mock HTTP
  - Error handling (timeout, 404, 500)
  - Concurrent fetch tests
- Add pytest-asyncio marker
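A possible shape for the basic test, assuming the HTTP call inside `fetch_data` goes through a patchable helper (the module name `api` and helper `_request` are invented for this sketch):

```python
from unittest.mock import AsyncMock, patch

import pytest

from python_modern_template import api  # hypothetical module exposing fetch_data


@pytest.mark.asyncio
async def test_fetch_data_returns_parsed_payload() -> None:
    """Test fetch_data returns the decoded response body."""
    # Arrange - patch only the outbound HTTP helper
    fake_response = {"status": "ok"}

    # Act
    with patch.object(api, "_request", new=AsyncMock(return_value=fake_response)):
        result = await api.fetch_data("https://example.com/data")

    # Assert - fetch_data's own logic still ran
    assert result == {"status": "ok"}
```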
## Remember
**TDD Workflow:**
1. Generate tests FIRST (this skill)
2. Run tests (they should FAIL)
3. Implement code
4. Run tests (they should PASS)
5. Refactor while keeping tests green
This skill helps you start TDD right—tests first, always!