---
name: test-orchestrator
description: Set up testing infrastructure and strategy for a project. This skill should be used when a project is ready for testing setup, including test framework configuration, initial test scaffolding, and documentation of testing approach. Primarily a setup skill with guidance for ongoing testing.
allowed-tools: Read, Grep, Glob, Write, Bash
---

# test-orchestrator
Tech stack indicators:
- package.json (Node.js/frontend framework)
- requirements.txt / pyproject.toml (Python)
- composer.json (PHP)
- CLAUDE.md (project context)
Existing test configuration:
- jest.config.js, vitest.config.ts
- pytest.ini, pyproject.toml [tool.pytest]
- phpunit.xml
- `tests/` or `__tests__/` directories
Framework-specific patterns:
- Next.js: app/, pages/, components/
- FastAPI: app/, routers/, models/
- PHP: src/, Controllers/, Models/
Existing tests:
- Count and categorize existing test files
- Identify testing patterns in use
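The detection steps above can be sketched as follows — a minimal illustration, assuming a `detect_stack` helper that does not exist in this skill and a marker-file mapping chosen for this example:

```python
# Hypothetical sketch: detect the tech stack from the indicator files above
from pathlib import Path

def detect_stack(root: str) -> list[str]:
    """Return the stacks whose marker files exist under `root`."""
    markers = {
        "package.json": "node",
        "requirements.txt": "python",
        "pyproject.toml": "python",
        "composer.json": "php",
    }
    found = {stack for marker, stack in markers.items() if (Path(root) / marker).exists()}
    return sorted(found)
```

In practice the skill performs this scan with the Glob and Read tools rather than code, but the logic is the same: presence of a marker file selects the test framework to configure.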
Vitest configuration (React/Next.js):
- vitest.config.ts with jsdom environment
- @testing-library/react for component assertions
- MSW (Mock Service Worker) for API mocking

Pytest configuration (Python/FastAPI):
- pytest.ini or pyproject.toml [tool.pytest]
- conftest.py for fixtures
- TestClient for FastAPI endpoint testing

PHPUnit configuration (PHP):
- phpunit.xml
- tests/Unit/, tests/Feature/ directories

Jest configuration (Node.js):
- jest.config.js
- Test database configuration
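For the pytest setup, `conftest.py` is where shared fixtures live. A minimal sketch (the fixture name and sample data are illustrative, not part of any generated project):

```python
# tests/conftest.py (sketch): fixtures defined here are visible to all tests
import pytest

def make_sample_item():
    return {"name": "Test Item", "description": "A test"}

@pytest.fixture
def sample_item():
    # Any test that declares a `sample_item` parameter receives this dict
    return make_sample_item()
```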
Testing Framework: {framework}

Test Types:
- Unit tests for {specific areas}
- Integration tests for {specific areas}
- E2E tests for {specific areas} (optional)

Coverage Goal: Start with critical paths, aim for 70-80% over time

Does this approach work for you? Any specific areas you want to prioritize?
<vitest-config>
```typescript
// vitest.config.ts
import path from 'path'
import { defineConfig } from 'vitest/config'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  test: {
    environment: 'jsdom',
    globals: true,
    setupFiles: ['./tests/setup.ts'],
    include: ['**/*.{test,spec}.{js,ts,jsx,tsx}'],
    coverage: {
      reporter: ['text', 'json', 'html'],
      exclude: ['node_modules/', 'tests/'],
    },
  },
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src'),
    },
  },
})
```
</vitest-config>
<jest-config>
```javascript
// jest.config.js
module.exports = {
  testEnvironment: 'jsdom',
  setupFilesAfterEnv: ['<rootDir>/tests/setup.js'],
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
  },
  collectCoverageFrom: [
    'src/**/*.{js,jsx,ts,tsx}',
    '!src/**/*.d.ts',
  ],
  testMatch: ['**/__tests__/**/*.[jt]s?(x)', '**/?(*.)+(spec|test).[jt]s?(x)'],
}
```
</jest-config>
<pytest-config>
```toml
# pyproject.toml
[tool.coverage.run]
source = ["app"]
omit = ["tests/", "alembic/"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
]
```
</pytest-config>
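The `exclude_lines` patterns above match source lines like the following — a small illustration (`debug_dump` is a hypothetical helper, not part of any generated project):

```python
# Lines matching the exclude_lines patterns are left out of coverage reports
from typing import TYPE_CHECKING

if TYPE_CHECKING:  # type-only imports never run, so coverage skips this block
    from collections import abc

def debug_dump(obj):  # pragma: no cover
    # Debugging helper deliberately excluded from coverage metrics
    print(repr(obj))
```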
<phpunit-config>
```xml
<!-- phpunit.xml -->
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="vendor/phpunit/phpunit/phpunit.xsd"
bootstrap="vendor/autoload.php"
colors="true">
<testsuites>
<testsuite name="Unit">
<directory>tests/Unit</directory>
</testsuite>
<testsuite name="Feature">
<directory>tests/Feature</directory>
</testsuite>
</testsuites>
<coverage>
<include>
<directory suffix=".php">src</directory>
</include>
</coverage>
</phpunit>
```
</phpunit-config>
Add test scripts to package.json:
```json
{
  "scripts": {
    "test": "vitest",
    "test:run": "vitest run",
    "test:coverage": "vitest run --coverage",
    "test:watch": "vitest --watch",
    "test:ui": "vitest --ui"
  }
}
```
Or for Python (in pyproject.toml):
```toml
[tool.poetry.scripts]
test = "pytest:main"
```
<example-unit-test-react>
```typescript
// tests/unit/Button.test.tsx
import { describe, it, expect, vi } from 'vitest'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Button } from '@/components/Button'

describe('Button Component', () => {
  it('renders with text', () => {
    render(<Button>Click me</Button>)
    expect(screen.getByRole('button')).toHaveTextContent('Click me')
  })

  it('handles click events', async () => {
    const handleClick = vi.fn()
    render(<Button onClick={handleClick}>Click me</Button>)
    await userEvent.click(screen.getByRole('button'))
    expect(handleClick).toHaveBeenCalledTimes(1)
  })

  it('can be disabled', () => {
    render(<Button disabled>Click me</Button>)
    expect(screen.getByRole('button')).toBeDisabled()
  })
})
```
</example-unit-test-react>
<example-integration-test-api>
```typescript
// tests/integration/api.test.ts
import { describe, it, expect, beforeAll, afterAll } from 'vitest'
describe('API Endpoints', () => {
  beforeAll(async () => {
    // Setup: start test server, seed database
  })

  afterAll(async () => {
    // Cleanup: stop server, clear database
  })

  describe('GET /api/items', () => {
    it('returns list of items', async () => {
      const response = await fetch('http://localhost:3000/api/items')
      const data = await response.json()
      expect(response.status).toBe(200)
      expect(Array.isArray(data)).toBe(true)
    })

    it('returns 401 without authentication', async () => {
      const response = await fetch('http://localhost:3000/api/protected')
      expect(response.status).toBe(401)
    })
  })
})
```
</example-integration-test-api>
<example-pytest>
```python
# tests/integration/test_api.py
import pytest
from httpx import AsyncClient

from app.main import app  # adjust to your application's entry point


@pytest.fixture
async def client():
    async with AsyncClient(app=app, base_url="http://test") as ac:
        yield ac


class TestItems:
    async def test_get_items(self, client):
        response = await client.get("/api/items")
        assert response.status_code == 200
        assert isinstance(response.json(), list)

    async def test_create_item(self, client):
        response = await client.post(
            "/api/items",
            json={"name": "Test Item", "description": "A test"}
        )
        assert response.status_code == 201
        assert response.json()["name"] == "Test Item"

    async def test_unauthorized_access(self, client):
        response = await client.get("/api/protected")
        assert response.status_code == 401
```
</example-pytest>
<setup-file>
```typescript
// tests/setup.ts
import '@testing-library/jest-dom'
import { cleanup } from '@testing-library/react'
import { afterEach, vi } from 'vitest'
// Cleanup after each test
afterEach(() => {
cleanup()
})
// Global mocks
vi.mock('next/navigation', () => ({
useRouter: () => ({
push: vi.fn(),
replace: vi.fn(),
prefetch: vi.fn(),
}),
useSearchParams: () => new URLSearchParams(),
usePathname: () => '/',
}))
```
</setup-file>
<strategy-template>
# Test Strategy

**Project:** {project_name}
**Created:** {date}
**Framework:** {test_framework}

## Testing Philosophy

This project follows a testing pyramid approach:

- Many unit tests (fast, isolated)
- Fewer integration tests (verify component interaction)
- Few E2E tests (critical user paths only)
## Test Types

### Unit Tests

Location: tests/unit/
Purpose: Test individual functions and components in isolation
Run: `npm test` or `pytest tests/unit`

What to test:

- Pure functions (utils, helpers)
- Component rendering
- State management logic
- Data transformations

### Integration Tests

Location: tests/integration/
Purpose: Test API endpoints and component interactions
Run: `npm test tests/integration` or `pytest tests/integration`

What to test:

- API endpoint responses
- Database operations
- Authentication flows
- Multi-component interactions

### E2E Tests (Optional)

Location: tests/e2e/
Purpose: Test complete user flows
Run: `npm run test:e2e`

What to test:

- Critical user journeys
- Happy path flows
- Authentication end-to-end
## Test Commands

| Command | Description |
|---|---|
| `npm test` | Run all tests once |
| `npm run test:watch` | Run tests in watch mode |
| `npm run test:coverage` | Run tests with coverage report |
| `npm run test:ui` | Open visual test UI |
## Coverage Goals

Current: {current}%
Target: 70-80% for critical paths

Focus coverage on:

- Business logic
- API endpoints
- Authentication
- Data validation

## Writing New Tests

### Naming Convention

- Test files: `*.test.ts` or `*.spec.ts`
- Test descriptions: "should [expected behavior] when [condition]"
### Test Structure

```typescript
describe('ComponentName', () => {
  describe('method or behavior', () => {
    it('should do something when condition', () => {
      // Arrange
      // Act
      // Assert
    })
  })
})
```
## Best Practices

- One assertion per test (when practical)
- Test behavior, not implementation
- Use descriptive test names
- Keep tests independent
- Mock external dependencies
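To make the last two practices concrete, here is a hedged sketch of mocking an external dependency with Python's standard `unittest.mock`; the function and endpoint names are illustrative only:

```python
# Sketch: mock an HTTP client so the test is fast, isolated, and offline
from unittest.mock import Mock

def fetch_item_name(client, item_id):
    # Code under test: depends on an injected client, not a global one
    response = client.get(f"/api/items/{item_id}")
    return response.json()["name"]

def test_fetch_item_name():
    fake_client = Mock()
    fake_client.get.return_value = Mock(json=lambda: {"name": "Test Item"})
    assert fetch_item_name(fake_client, 1) == "Test Item"

test_fetch_item_name()
```

Injecting the dependency (rather than importing it inside the function) is what makes this kind of mock trivial to supply.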
## CI Integration

Tests run automatically on:

- Every push to `main` and `dev` branches
- Every pull request
See .github/workflows/ci.yml for configuration.
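A minimal sketch of what that workflow might contain — a Node example; the action versions, branch names, and steps are assumptions to adapt for your stack:

```yaml
# .github/workflows/ci.yml (sketch)
name: CI
on:
  push:
    branches: [main, dev]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run test:coverage
```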
## Resources
</strategy-template>
</phase>
<phase id="5" name="provide-guidance">
<action>Summarize what was created and provide educational guidance.</action>
<summary-template>
## Test Infrastructure Setup Complete
**Project:** {project_name}
**Framework:** {test_framework}
---
### Files Created
- {config_file} - Test framework configuration
- tests/setup.{ext} - Test setup and global mocks
- tests/unit/example.test.{ext} - Example unit test
- tests/integration/api.test.{ext} - Example integration test
- .docs/test-strategy.md - Testing strategy documentation
---
### Test Commands
```bash
# Run all tests
{run_command}
# Run tests in watch mode
{watch_command}
# Run with coverage
{coverage_command}
```
---
### What's Next

Your testing infrastructure is set up. Here's how to proceed:

1. **Run the example tests** to verify the setup: `{run_command}`

2. **Write tests as you build features:**
   - Write tests before or alongside new code
   - Focus on critical business logic first
   - Use the example tests as templates

3. **Maintain test health:**
   - Keep tests passing
   - Review coverage periodically
   - Update tests when behavior changes
---
### Educational Notes

**Testing pyramid:**
- Unit tests are fast and numerous - test small pieces in isolation
- Integration tests verify components work together
- E2E tests are slow but valuable for critical paths

**When to write tests:**
- Before implementing (TDD) - helps design better code
- After implementing - validates behavior
- When fixing bugs - prevents regression

**Test-Driven Development (TDD) cycle:**
1. Write a failing test
2. Write minimum code to pass
3. Refactor while keeping tests green
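One loop of that cycle, sketched in Python (`slugify` is a hypothetical feature used only for illustration):

```python
# Step 1 (red): write a failing test first -- slugify does not exist yet
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the minimum code that makes the test pass
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up freely while the test stays green
test_slugify()
```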
---
### Workflow Status

This is an optional phase in the Skills workflow.

**Previous:** Phase 3 (project-spinup) - Project foundation
**Current:** Phase 4 (test-orchestrator) - Test infrastructure
**Next:** Phase 5 (deploy-guide) - Deploy your application

You can continue building features and write tests as you go. When ready to deploy, use the deploy-guide skill.

Status:
- Phase 0: Project Brief (project-brief-writer)
- Phase 1: Tech Stack (tech-stack-advisor)
- Phase 2: Deployment Strategy (deployment-advisor)
- Phase 3: Project Foundation (project-spinup) <- TERMINATION POINT (localhost)
- Phase 4: Test Strategy (you are here) - optional
- Phase 5: Deployment (deploy-guide) <- TERMINATION POINT (manual deploy)
- Phase 6: CI/CD (ci-cd-implement) <- TERMINATION POINT (full automation)
Mention this option when users seem uncertain about their progress.