| name | tdd-pytest |
| description | Python/pytest TDD specialist for test-driven development workflows. Use when writing tests, auditing test quality, running pytest, or generating test reports. Integrates with uv and pyproject.toml configuration. |
| author | Joseph OBrien |
| status | unpublished |
| updated | 2025-12-23 |
| version | 1.0.1 |
| tag | skill |
| type | skill |
# TDD-Pytest Skill
Activate this skill when the user needs help with:
- Writing tests using TDD methodology (Red-Green-Refactor)
- Auditing existing pytest test files for quality
- Running tests with coverage
- Generating test reports to
TESTING_REPORT.local.md - Setting up pytest configuration in
pyproject.toml
## TDD Workflow

### Red-Green-Refactor Cycle

**RED** - Write a failing test first
- Test should fail for the right reason (not import errors)
- Test should be minimal and focused
- Show the failing test output
**GREEN** - Write minimal code to pass
- Only implement what's needed to pass the test
- No premature optimization
- Show the passing test output
**REFACTOR** - Improve code while keeping tests green
- Clean up duplication
- Improve naming
- Extract functions/classes if needed
- Run tests after each change
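To make the cycle concrete, here is a minimal sketch; the `slugify` function, its module path, and the desired behavior are purely illustrative and not part of this skill:

```python
# RED - tests/test_slugify.py
# With only a stub implementation in place, this fails on the assertion
# (the "right reason"), not on an import error.
from src.module import slugify  # hypothetical module and function

def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# GREEN - src/module.py
# The minimal implementation that makes this one test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# REFACTOR
# With the test green, rename, extract helpers, or remove duplication,
# re-running the tests after each change.
```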
## Test Organization

### File Structure

```
project/
    src/
        module.py
    tests/
        conftest.py       # Shared fixtures
        test_module.py    # Tests for module.py
    pyproject.toml        # Pytest configuration
```
### Naming Conventions

- Test files: `test_*.py` or `*_test.py`
- Test functions: `test_*`
- Test classes: `Test*`
- Fixtures: descriptive names (`mock_database`, `sample_user`)
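A minimal sketch of a test module that follows these conventions (the fixture and assertion are illustrative):

```python
# tests/test_user_service.py
import pytest

@pytest.fixture
def sample_user():
    return {"name": "Ada", "active": True}

class TestUserService:
    def test_active_user_is_reported_active(self, sample_user):
        assert sample_user["active"] is True
```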
## Pytest Best Practices

### Fixtures

```python
import pytest

@pytest.fixture
def sample_config():
    return {"key": "value"}

@pytest.fixture
def mock_client(mocker):
    # The mocker fixture is provided by the pytest-mock plugin
    return mocker.MagicMock()
```
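Tests request fixtures by listing them as parameters. As a sketch using the fixtures above (the assertions are illustrative and assume pytest-mock is installed):

```python
def test_client_returns_configured_value(sample_config, mock_client):
    # Program the mock, then exercise and verify it
    mock_client.get.return_value = sample_config["key"]
    assert mock_client.get("key") == "value"
    mock_client.get.assert_called_once_with("key")
```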
### Parametrization

```python
@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("", ""),
])
def test_uppercase(input, expected):
    assert input.upper() == expected
```
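For readable test IDs in the output, individual cases can be wrapped in `pytest.param`; a brief sketch:

```python
import pytest

@pytest.mark.parametrize("text,expected", [
    pytest.param("hello", "HELLO", id="lowercase"),
    pytest.param("", "", id="empty-string"),
])
def test_uppercase_with_ids(text, expected):
    assert text.upper() == expected
```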
### Async Tests

```python
import pytest

# Requires the pytest-asyncio plugin; async_operation and expected
# stand in for the coroutine under test and its expected result.
@pytest.mark.asyncio
async def test_async_function():
    result = await async_operation()
    assert result == expected
```
### Exception Testing

```python
def test_raises_value_error():
    with pytest.raises(ValueError, match="invalid input"):
        process_input(None)
```
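When the test also needs to inspect the exception object, `pytest.raises` can capture it; `process_input` is the same hypothetical function as above:

```python
def test_raises_value_error_with_details():
    with pytest.raises(ValueError) as excinfo:
        process_input(None)
    assert "invalid input" in str(excinfo.value)
```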
## Running Tests

### With uv

```bash
uv run pytest                              # Run all tests
uv run pytest tests/test_module.py         # Run specific file
uv run pytest -k "test_name"               # Run by name pattern
uv run pytest -v --tb=short                # Verbose with short traceback
uv run pytest --cov=src --cov-report=term  # With coverage
```
### Common Flags

- `-v` / `--verbose` - Detailed output
- `-x` / `--exitfirst` - Stop on first failure
- `--tb=short` - Short tracebacks
- `--tb=no` - No tracebacks
- `-k EXPR` - Run tests matching expression
- `-m MARKER` - Run tests with marker
- `--cov=PATH` - Coverage for path
- `--cov-report=term-missing` - Show missing lines
## pyproject.toml Configuration

### Minimal Setup

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```
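The plugins implied by this configuration and the earlier examples are not bundled with pytest. Assuming a uv-managed project, they could be added as development dependencies like this:

```bash
# pytest-asyncio enables asyncio_mode, pytest-cov provides --cov,
# and pytest-mock provides the mocker fixture
uv add --dev pytest pytest-asyncio pytest-cov pytest-mock
```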
### Full Configuration

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["src"]
branch = true
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80
show_missing = true
```
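Markers declared above are applied with `@pytest.mark.<name>` and selected with `-m`; a sketch with an illustrative test body:

```python
import pytest

# Select with `uv run pytest -m slow`, or exclude with `-m "not slow"`.
@pytest.mark.slow
def test_large_input_is_uppercased():
    data = "hello " * 10_000
    assert data.upper().count("HELLO") == 10_000
```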
## Report Generation

The `TESTING_REPORT.local.md` file should contain:
- Test execution summary (passed/failed/skipped)
- Coverage metrics by module
- Audit findings by severity
- Recommendations with file:line references
- Evidence (command outputs)
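A possible skeleton for that report, offered as an assumption about layout rather than a fixed format:

```markdown
# Testing Report

## Test Execution Summary
Passed: <n> / Failed: <n> / Skipped: <n>

## Coverage by Module
| Module | Coverage |
| --- | --- |
| src/module.py | <pct>% |

## Audit Findings
- [high] <finding> (src/module.py:<line>)

## Recommendations
- <recommendation> (file.py:<line>)

## Evidence
- Command: `uv run pytest --cov=src --cov-report=term-missing`
- Output: <trimmed pytest output>
```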
## Integration with Conversation
When the user asks to write tests:
- Check conversation history for context about what to test
- Identify the code/feature being discussed
- If unclear, ask clarifying questions:
  - "What specific behavior should I test?"
  - "Should I include edge cases for X?"
  - "Do you want unit tests, integration tests, or both?"
- Follow TDD: Write failing test first, then implement
## Commands Available

- `/tdd-pytest:init` - Initialize pytest configuration
- `/tdd-pytest:test [path]` - Write tests using TDD (context-aware)
- `/tdd-pytest:test-all` - Run all tests
- `/tdd-pytest:report` - Generate/update `TESTING_REPORT.local.md`