Claude Code Plugins

Community-maintained marketplace

pytest-best-practices

@cfircoo/claude-code-toolkit

Expert guidance for writing high-quality pytest tests. Use when writing tests, setting up fixtures, parametrizing, mocking, or reviewing test code.

Install Skill

1. Download skill
2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: pytest-best-practices
description: Expert guidance for writing high-quality pytest tests. Use when writing tests, setting up fixtures, parametrizing, mocking, or reviewing test code.
Provide pytest best practices and patterns for writing maintainable, efficient tests.

Test Independence

  • Each test must run in isolation - no shared state between tests
  • Use fixtures for setup/teardown, never class-level mutable state (see the sketch after this list)
  • Tests should pass regardless of execution order
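
A minimal sketch of fixture-based isolation; the `UserStore` class and the tests below are hypothetical, for illustration only:

```python
import pytest


class UserStore:
    """Hypothetical in-memory store, used only to illustrate isolation."""

    def __init__(self):
        self.users = []

    def add(self, name):
        self.users.append(name)


@pytest.fixture
def store():
    # A fresh store is built for every test, so no state leaks between tests.
    return UserStore()


def test_add_user(store):
    store.add("alice")
    assert store.users == ["alice"]


def test_store_starts_empty(store):
    # Passes regardless of execution order because the fixture rebuilds the store.
    assert store.users == []
```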

Naming Conventions

  • Files: test_*.py or *_test.py
  • Functions: test_<description>()
  • Classes: Test<ClassName>
  • Fixtures: descriptive lowercase_with_underscores
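
Put together, the conventions might look like this (the file name, fixture, and class below are hypothetical):

```python
# tests/unit/test_orders.py  -- file matches the test_*.py pattern
import pytest


@pytest.fixture
def empty_cart():  # fixture name: descriptive lowercase_with_underscores
    return []


class TestCart:  # class name: Test<ClassName>
    def test_starts_empty(self, empty_cart):  # function name: test_<description>
        assert empty_cart == []
```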

Directory Structure

```
tests/
├── conftest.py          # Shared fixtures
├── unit/
│   └── test_module.py
├── integration/
│   └── test_api.py
└── fixtures/            # Test data files
```
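
A fixture defined in `tests/conftest.py` is available to every test module beneath it without imports. A minimal sketch, assuming a `payload.json` data file under `tests/fixtures/` (both the fixture name and the data file are hypothetical):

```python
# tests/conftest.py
import json
from pathlib import Path

import pytest

FIXTURES_DIR = Path(__file__).parent / "fixtures"


@pytest.fixture
def sample_payload():
    # Any test under tests/ can request this fixture by name, no import needed.
    return json.loads((FIXTURES_DIR / "payload.json").read_text())
```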

Core Testing Rules

  • Use plain assert statements (pytest provides detailed failure messages)
  • One logical assertion per test when practical
  • Test edge cases: empty inputs, boundaries, invalid data, errors
  • Keep tests focused and readable
| Pattern | Use Case |
|---|---|
| `@pytest.fixture` | Setup/teardown, dependency injection |
| `@pytest.mark.parametrize` | Run test with multiple inputs |
| `@pytest.mark.skip` | Skip test temporarily |
| `@pytest.mark.xfail` | Expected failure (known bug) |
| `pytest.raises(Exception)` | Test exception raising |
| `pytest.approx(value)` | Float comparison |
| `mocker.patch()` | Mock dependencies |
| `conftest.py` | Share fixtures across modules |
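
A short sketch combining several of the rules and patterns above; the `to_celsius` helper is hypothetical:

```python
import pytest


def to_celsius(fahrenheit: float) -> float:
    """Hypothetical conversion helper under test."""
    if fahrenheit < -459.67:
        raise ValueError("below absolute zero")
    return (fahrenheit - 32) * 5 / 9


@pytest.mark.parametrize(
    ("fahrenheit", "expected"),
    [
        (32, 0.0),     # freezing point
        (212, 100.0),  # boiling point
        (-40, -40.0),  # the point where both scales agree
    ],
)
def test_to_celsius(fahrenheit, expected):
    # Plain assert plus pytest.approx avoids brittle exact float comparisons.
    assert to_celsius(fahrenheit) == pytest.approx(expected)


def test_to_celsius_rejects_impossible_input():
    # Edge case: invalid input raises instead of returning a bogus value.
    with pytest.raises(ValueError):
        to_celsius(-500)
```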

Common Commands

```bash
pytest -v                    # Verbose
pytest -x                    # Stop on first failure
pytest --lf                  # Run last failed
pytest -k "pattern"          # Match test names
pytest -m "marker"           # Run marked tests
pytest --cov=src             # Coverage report
```

Based on what you're doing, read the relevant reference:

| Task | Reference |
|---|---|
| Setting up fixtures, scopes, factories | `references/fixtures.md` |
| Parametrizing tests, multiple inputs | `references/parametrization.md` |
| Mocking, patching, faking dependencies | `references/mocking.md` |
| Markers, exceptions, assertions, async | `references/patterns.md` |
```bash
pip install pytest pytest-asyncio pytest-mock pytest-cov pytest-xdist
```