| name | testing |
| description | Provides expertise on how to write effective unit tests (runtime and type tests), make testing decisions, and incorporate TDD into development workflows |
Testing Skill
Overview
This skill provides comprehensive guidance on testing in TypeScript projects, with a focus on the dual nature of testing: runtime behavior and type correctness. It covers when to use each type of test, how to structure tests effectively, and how to integrate testing into development workflows using Test-Driven Development (TDD).
Types of Testing
1. Runtime Tests
Runtime tests verify the actual behavior of code during execution.
When to use:
- Testing function outputs with various inputs
- Verifying error handling and edge cases
- Checking side effects (file I/O, API calls, state mutations)
- Validating business logic and algorithms
- Testing class instance behavior and methods
Tools:
- Test runner: Vitest
- Commands:
- `pnpm test` - runs all runtime tests
- `pnpm test GLOB` - runs tests matching the glob pattern
Example structure:
import { describe, it, expect } from "vitest";
import { prettyPath } from "~/utils";
describe("prettyPath()", () => {
it("should format a path with directory and filename", () => {
const result = prettyPath("/path/to/file.ts");
expect(result).toBe("/path/to/file.ts"); // Expected formatting
});
it("should handle edge case: empty string", () => {
const result = prettyPath("");
expect(result).toBe("");
});
it("should handle edge case: root path", () => {
const result = prettyPath("/");
expect(result).toBe("/");
});
});
2. Type Tests
Type tests verify the type correctness of TypeScript code at design time.
When to use:
- Testing type utility functions (always)
- Verifying generic type constraints work as expected
- Ensuring conditional types resolve correctly
- Testing that complex inferred types are accurate
- Validating discriminated unions and type narrowing
- Checking that function signatures accept/reject correct types
Tools:
- Commands:
- `pnpm test:types` - runs all type tests
- `pnpm test:types GLOB` - runs type tests matching the glob pattern
Type Test Structure
This section will show you, layer by layer, how to compose and build good type tests.
cases block
All type tests in a given `it()` test block (provided by Vitest) are gathered into a type called `cases`, defined as an array of type tests.
type cases = [
// ... type tests go here
]
Note: our linting rules allow the name `cases` to be defined without being used; this is intentional and a good thing.
Expect<...> wrapper
Every type test will be wrapped by an Expect type utility.
type cases = [
Expect<...>,
Expect<...>,
// ...
]
Available Type Test Assertions
The inferred-types library provides a number of useful assertion utilities you can use to create your tests:
- `AssertTrue<T>` - tests whether the tested type `T` is the type `true`
- `AssertFalse<T>` - tests whether the tested type `T` is the type `false`
- `AssertEqual<T,E>` - tests that the tested type `T` equals the expected type `E`
- `AssertExtends<T,E>` - tests that the tested type `T` extends the expected type `E`
- `AssertSameValues<T,E>` - tests that the tested type `T` is an array type and every element of `E` and `T` are the same, but the order in which they arrive does not matter
- `AssertContains<T,E>` - when the tested type `T` is a `string`, this utility passes when `E` is also a `string` and represents a sub-string of the string literal `T`; when `T` is an array, it checks whether `E` appears among `T`'s elements
In all cases you put the test assertion inside of the Expect utility:
type cases = [
Expect<AssertTrue<T>>,
Expect<AssertExtends<T, string>>,
// ...
]
Example 1
In our example we'll just test TypeScript's built-in type utility Capitalize<T>.
- this utility simply capitalizes the first letter in a string literal
import { describe, it } from "vitest";
import type { Expect, AssertEqual } from "inferred-types/types";
describe("Example 1", () => {
it("string literals", () => {
type Lowercase = Capitalize<"foo">;
type AlreadyCapitalized = Capitalize<"Foo">;
type cases = [
Expect<AssertEqual<Lowercase, "Foo">>,
Expect<AssertEqual<AlreadyCapitalized, "Foo">>,
]
});
it("wide string", () => {
type Wide = Capitalize<string>;
type cases = [
Expect<AssertEqual<Wide, string>>
]
})
it("only first letter capitalized", () => {
type SpaceThenLetter = Capitalize<" foo">;
type TabThenLetter = Capitalize<"\tfoo">;
type cases = [
Expect<AssertEqual<SpaceThenLetter, " foo">>,
Expect<AssertEqual<TabThenLetter, "\tfoo">>,
]
})
});
IMPORTANT: in the example above we were testing a type utility (a type utility being any type which accepts generics and uses them to produce a type). With type utilities you CAN'T do runtime testing because there is no runtime component to test; however, we still use the Vitest primitives describe and it to organize the tests.
Example 2
Let's imagine we create a simple function:
`capitalize<T extends string>(text: T): Capitalize<T>`
- here we have a VERY common situation for library authors: a function which provides a narrow return type
- in this situation we will want to have BOTH runtime and type tests (a sketch of the function and its tests follows)
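A minimal sketch of what such an implementation might look like (only the signature comes from the example above; the body is illustrative):
function capitalize<T extends string>(text: T): Capitalize<T> {
    // Uppercase the first character and keep the rest unchanged; the cast is needed
    // because TypeScript cannot verify the runtime string against Capitalize<T>.
    return (text.charAt(0).toUpperCase() + text.slice(1)) as Capitalize<T>;
}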
describe("example", () => {
it("leading alpha character", () => {
const lowercase = capitalize("foo");
const alreadyCapitalized = capitalize("Foo");
expect(lowercase).toEqual("Foo");
expect(alreadyCapitalized).toEqual("Foo");
type cases = [
Expect<AssertEqual<typeof lowercase, "Foo">>,
Expect<AssertEqual<typeof alreadyCapitalized, "Foo">>,
]
});
it("wide string", () => {
const wide = capitalize("foo" as string);
expect(wide).toBe("Foo");
type cases = [
Expect<AssertEqual<typeof wide, string>>
]
})
it("non-alpha leading character", () => {
const spaceThenLetter = capitalize(" foo");
const tabThenLetter = capitalize("\tfoo");
expect(spaceThenLetter).toBe(" foo");
expect(tabThenLetter).toBe("\tfoo");
type cases = [
Expect<AssertEqual<typeof spaceThenLetter, " foo">>,
Expect<AssertEqual<typeof tabThenLetter, "\tfoo">>,
]
})
})
IMPORTANT: in these sorts of tests the runtime and type tests naturally fit into the same describe/it blocks. You should almost NEVER have a set of runtime tests in one structure, and then a set of type tests in another. This almost always indicates someone who doesn't understand type testing well enough yet.
IMPORTANT: in both examples we've seen a test structure where we define intermediate variables/types that take on the value/type under test, and then use those variables/types in our assertions. We could inline the expression being tested directly into the runtime and type tests, but this can have undesirable side effects in some cases. Defining the intermediate variables/types first also allows a human observer to hover over the variable and see how its type resolved. This is highly valuable!
Common Type Testing Mistakes
Mistake #1: Separated "Type Tests" Blocks (MOST COMMON)
❌ WRONG - Separated structure:
describe("myFunction()", () => {
describe("Runtime tests", () => {
it("should work", () => {
expect(myFunction("test")).toBe("result");
});
});
describe("Type Tests", () => { // ❌ WRONG!
it("should have correct type", () => {
const result = myFunction("test");
const _check: typeof result extends string ? true : false = true;
expect(_check).toBe(true); // ❌ This is NOT a type test!
});
});
});
✅ CORRECT - Integrated structure:
describe("myFunction()", () => {
it("should work with string input", () => {
const result = myFunction("test");
// Runtime test
expect(result).toBe("result");
// Type test - in the SAME it() block
type cases = [
Expect<AssertEqual<typeof result, "result">>
];
});
});
Mistake #2: Using Runtime Checks for Type Testing
❌ WRONG:
const result = myFunction("test");
const _isString: typeof result extends string ? true : false = true;
expect(_isString).toBe(true); // This is runtime testing, not type testing!
✅ CORRECT:
const result = myFunction("test");
type cases = [
Expect<AssertExtends<typeof result, string>>
];
Mistake #3: No cases Array
❌ WRONG:
Expect<AssertEqual<typeof result, "expected">>; // Not in cases array!
✅ CORRECT:
type cases = [
Expect<AssertEqual<typeof result, "expected">>
];
Mistake #4: Using typeof with Runtime Assertions
❌ WRONG:
const result = myFunction("test");
expect(typeof result).toBe("string"); // This is runtime, not type testing!
✅ CORRECT:
const result = myFunction("test");
// Runtime test (if needed)
expect(result).toBe("expected-value");
// Type test
type cases = [
Expect<AssertExtends<typeof result, string>>
];
Type Test Validation
Before submitting ANY work with type tests, verify:
- Pattern check: Does every type test use `type cases = [...]`?
- Assertion check: Does every assertion use `Expect<Assert...>`?
- Structure check: Are type tests side-by-side with runtime tests?
- Import check: Do files import from `inferred-types/types`?
- No separation: Are there ZERO "Type Tests" describe blocks?
- Tests pass: Does `pnpm test:types` show "🎉 No errors!"?
If any check fails, the type tests are incorrect and must be rewritten.
Decision Framework: Which Tests to Write?
Use this flowchart to determine what tests you need:
Is the symbol exported from the module?
│
├─ NO → Consider if it needs tests at all
│ (internal helpers may not need dedicated tests)
│
└─ YES → What kind of symbol is it?
│
├─ Type Utility (e.g., a type which takes generics)
│ └─ Write TYPE TESTS always; no RUNTIME tests are even possible!
│
├─ Constant (literal value)
│ └─ Usually NO tests needed
│ (unless it's a complex computed value)
│
├─ Function / Arrow Function
│ └─ Does it return a literal type?
│ ├─ YES → Write BOTH runtime AND type tests
│ └─ NO → Write RUNTIME tests (minimum); possibly write type tests
│
├─ Class
│ └─ Does it use generics or have methods which return literal types?
│ ├─ YES → Write BOTH runtime AND type tests
│ └─ NO → Write RUNTIME tests primarily
│
└─ Interface / Type Definition (e.g., a type without a generic input)
└─ Usually NO test needed; if there is no generic then there is no variance to test
└── Only exception: when the type being defined uses a lot of type utilities in its definition. In these cases, you _might_ test that the type does not resolve to `any` or `never`, since a break in the underlying utilities typically surfaces as one of those.
Rule of thumb: When in doubt, write tests. It's better to have coverage than to skip it.
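To make the "Class" branch of the flowchart concrete, here is a hedged sketch of a class whose method returns a literal type and therefore warrants both kinds of tests (the Box class is purely illustrative):
import { describe, it, expect } from "vitest";
import type { Expect, AssertEqual } from "inferred-types/types";

class Box<T extends string> {
    constructor(private value: T) {}
    // The template-literal return type keeps the label narrow
    label(): `box(${T})` {
        return `box(${this.value})`;
    }
}

describe("Box<T>", () => {
    it("should produce a narrow label", () => {
        const box = new Box("gift");
        const label = box.label();

        // Runtime test
        expect(label).toBe("box(gift)");

        // Type test - side-by-side in the same it() block
        type cases = [
            Expect<AssertEqual<typeof label, "box(gift)">>
        ];
    });
});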
Test Organization and Structure
File Structure
Tests are organized by feature/command area:
tests/
├── unit/
│ ├── test-command/ # Tests for the 'test' CLI command
│ ├── symbol-command/ # Tests for the 'symbols' CLI command
│ ├── source-command/ # Tests for the 'source' CLI command
│ ├── utils/ # Tests for utility functions
│ └── WIP/ # Temporary location for in-progress phase tests
├── integration/
│ ├── fast/ # Fast integration tests (<2s each)
│ └── *.test.ts # Full integration tests
└── fixtures/ # Test fixtures and sample projects
Naming Conventions
- Test files: `*.test.ts`
- Fast integration tests: `*.fast.test.ts`
- Test descriptions:
  - Use "should" statements: `it("should return true when...")`
  - Be specific about the scenario: `it("should handle empty arrays")`
  - Describe the behavior, not the implementation
Test Structure Principles
DO:
- Keep tests focused on a single behavior
- Use descriptive test names that explain the scenario
- Group related tests in `describe` blocks
- Test edge cases (empty, null, undefined, boundary values)
- Test error conditions and failure modes
- Make tests independent (no shared state between tests)
DON'T:
- Test implementation details (test behavior, not internals)
- Add logic to tests (no conditionals, loops, or complex computations)
- Share mutable state between tests
- Make tests depend on execution order
- Skip asserting the results (every test needs expectations)
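One way to honor the "no logic in tests" rule while still covering many inputs is Vitest's it.each. A brief sketch, reusing the prettyPath() example from earlier (the expected outputs mirror that example and are illustrative):
import { describe, it, expect } from "vitest";
import { prettyPath } from "~/utils";

describe("prettyPath()", () => {
    // Each row becomes its own independent test case - no loops or conditionals
    it.each([
        ["", ""],
        ["/", "/"],
        ["/path/to/file.ts", "/path/to/file.ts"],
    ])("should format %s", (input, expected) => {
        expect(prettyPath(input)).toBe(expected);
    });
});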
TDD Workflow for Phase-Based Development
When implementing a new feature or phase of work, follow this comprehensive TDD workflow:
Phase Structure Overview
- SNAPSHOT - Capture current test state
- CREATE LOG - Document starting position
- WRITE TESTS - Create tests first (TDD)
- IMPLEMENTATION - Build to pass tests
- CLOSE OUT - Verify, migrate tests, document completion
Step 1: SNAPSHOT
Capture the current state of all tests before making any changes.
Actions:
Run all runtime tests: `pnpm test`
Run all type tests: `pnpm test:types`
Create a simple XML representation of test results distinguishing between runtime and type test runs
Document any existing failures (these are your baseline - don't fix yet)
Purpose: Establish a clear baseline so you can detect regressions and measure progress.
Step 2: CREATE LOG
Create a log file to track this phase of work.
Actions:
Create the log file with the naming convention:
  mkdir -p .ai/logs
  touch .ai/logs/YYYY-MM-planName-phaseN-log.md
Example: `.ai/logs/2025-10-symbol-filtering-phase1-log.md`
Add a `## Starting Test Position` section with an XML code block containing the test results from SNAPSHOT
Add a `## Repo Starting Position` section
Run the start-position script to capture git state:
  bun run .claude/skills/scripts/start-position.ts planName phaseNumber
This returns markdown content showing:
- Last local commit hash
- Last remote commit hash
- Dirty files (uncommitted changes)
- File snapshot (if not using --dry-run flag)
Append the start-position output to the log file
Purpose: Create a detailed record of the starting point for debugging and tracking progress.
Step 3: WRITE TESTS
Write tests FIRST before any implementation. This is true Test-Driven Development.
Actions:
Understand existing test structure:
- Review similar tests in the codebase
- Identify patterns and conventions
- Determine where your tests should eventually live
Create tests in the WIP directory:
- All new test files for this phase go in `tests/unit/WIP/`
- This isolation allows:
  - Easy GLOB pattern targeting: `pnpm test WIP`
  - Regression testing by exclusion: `pnpm test --exclude WIP`
  - Clear separation of work-in-progress from stable tests
Write comprehensive test coverage:
- Start with happy path (expected successful behavior)
- Add edge cases (empty, null, undefined, boundaries)
- Add error conditions
- Include both runtime and type tests if applicable
Verify tests FAIL initially:
- Run your new tests: `pnpm test WIP`
- Confirm they fail (you haven't implemented yet)
- Failing tests prove they're valid and will detect when implementation is complete
Example WIP structure:
tests/unit/WIP/
├── phase1-cli-options.test.ts
├── phase1-filter-logic.test.ts
└── phase1-integration.test.ts
Purpose: Tests define the contract and expected behavior before any code is written.
Step 4: IMPLEMENTATION
Use the tests to guide your implementation.
Actions:
Implement minimal code to pass each test:
- Work on one test at a time (or small group)
- Write the simplest code that makes the test pass
- Don't over-engineer or add features not covered by tests
Iterate rapidly:
- Run tests frequently: `pnpm test WIP`
- For type tests: `pnpm test:types WIP`
- Fix failures immediately
- Keep the feedback loop tight
Continue until all phase tests pass:
- All tests in `tests/unit/WIP/` should be green
- No shortcuts - every test must pass
Refactor with confidence:
- Once tests pass, improve code quality
- Tests act as a safety net
- Re-run tests after each refactor
Purpose: Let tests drive the implementation, ensuring you build exactly what's needed.
Step 5: CLOSE OUT
Verify completeness, check for regressions, and finalize the phase.
🚨 CRITICAL WARNING: DO NOT MIGRATE TESTS AUTOMATICALLY 🚨
Tests MUST remain in tests/unit/WIP/ until the user explicitly reviews and approves them. Even if the user says "closeout this phase" or "finish up" - DO NOT migrate tests. Only migrate after user says "migrate the tests" or explicitly approves migration.
Actions:
Run the full test suite:
  pnpm test        # All runtime tests
  pnpm test:types  # All type tests
Handle any regressions:
If existing tests now fail:
- STOP and think deeply - understand WHY the test is failing, not just the error message
- Document the regression in the log file under `## Regressions Found`
- Determine root cause:
- Is your implementation incorrect?
- Does the existing test need updating (only if requirements changed)?
- Is there a side effect you didn't anticipate?
- Fix the root cause, not just the symptom
- Re-run all tests to confirm fix
Update the log file:
Add a `## Phase Completion` section with:
- Date and time completed
- Final test count (passing/total)
- Any notable issues or decisions made
- Tests location: `tests/unit/WIP/` (awaiting user review)
Report completion to user:
Inform the user that the phase is complete with a summary of:
- What was implemented
- Test coverage added
- Tests are in `tests/unit/WIP/` awaiting review
- Any important notes or caveats
CRITICAL: Tests remain in WIP directory until user reviews:
- DO NOT migrate tests automatically
- Tests MUST stay in `tests/unit/WIP/` until the user has reviewed and approved them
- Only after explicit user approval should tests be migrated
- This allows the user to:
- Review test quality and coverage
- Verify test patterns are correct
- Ensure tests match requirements
- Request changes before tests become permanent
Test migration (only after user approval):
When the user approves the tests:
- Think carefully about the right permanent location for each test
- Consider if a new subdirectory is needed in the test structure
- Move tests from `tests/unit/WIP/` to their permanent homes
- Delete the `tests/unit/WIP/` directory
- Rerun tests to ensure nothing broke during migration
- Update the log file with final test locations
Purpose: Ensure quality, prevent regressions, and properly integrate work into the codebase.
Testing Best Practices
General Principles
Prefer real implementations over mocks: Only mock external dependencies (APIs, file system, databases). Keep internal code integration real.
Use realistic test data: Mirror actual usage patterns. If your function processes user objects, use realistic user data in tests.
One behavior per test: Each `it()` block should test a single specific behavior. This makes failures easier to diagnose.
Tests should be deterministic: Same input = same output, every time. Avoid depending on current time, random values, or external state unless that's what you're testing.
Keep tests independent: Each test should be able to run in isolation. Use `beforeEach()` for setup, not shared variables (see the sketch below).
Test the contract, not the implementation: If you change HOW something works but it still behaves the same, tests shouldn't break.
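A minimal sketch of keeping tests independent with beforeEach(); the UserStore class is hypothetical and exists only to illustrate per-test setup:
import { describe, it, expect, beforeEach } from "vitest";

// Hypothetical class used only to illustrate per-test setup
class UserStore {
    private users: string[] = [];
    add(name: string): void {
        this.users.push(name);
    }
    count(): number {
        return this.users.length;
    }
}

describe("UserStore", () => {
    let store: UserStore;

    beforeEach(() => {
        // A fresh instance for every test - no shared mutable state
        store = new UserStore();
    });

    it("should start empty", () => {
        expect(store.count()).toBe(0);
    });

    it("should count added users", () => {
        store.add("sam");
        expect(store.count()).toBe(1);
    });
});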
Error Handling
Prioritize fixing source code over changing tests: When tests fail, your first instinct should be to fix the implementation to meet the test's expectation, not to change the test to match the implementation.
Understand failures deeply: Don't just read the error message - understand WHY the test is failing. Use debugging, logging, or step through the code if needed.
Document complex test scenarios: If a test needs explanation, add a comment describing what scenario it's covering and why it matters.
Performance
Keep unit tests fast: Unit tests should run in milliseconds. If a test is slow, it's likely testing too much or hitting external resources.
Separate fast and slow tests: Integration tests can be slower. Keep them in separate files (e.g., `*.fast.test.ts` vs `*.test.ts`).
Use focused test runs during development: Don't run the entire suite on every change. Use glob patterns to run just what you're working on.
Type Testing Specifics
Always test the positive case: Verify that valid types are accepted and produce the expected result type.
Test the negative case when relevant: Use `@ts-expect-error` to verify that invalid types are properly rejected (see the sketch below).
Test edge cases in type logic: Empty objects, `never`, `unknown`, union types, etc.
Keep type tests close to runtime tests: When testing a function with both runtime and type tests, keep them in the same file within the same `describe` block for cohesion.
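A hedged sketch of a negative type test with @ts-expect-error; assertNumber() is hypothetical and only illustrates the pattern:
import { describe, it, expect } from "vitest";

// Hypothetical function used only to illustrate the pattern
function assertNumber(value: number): number {
    return value;
}

describe("assertNumber()", () => {
    it("should accept numbers and reject strings at the type level", () => {
        // Positive case - valid input is accepted
        expect(assertNumber(42)).toBe(42);

        // Negative case - the file only type-checks if the next line is a type error
        // @ts-expect-error - a string argument must be rejected
        assertNumber("not a number");
    });
});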
Common Patterns and Examples
Testing Error Cases
it("should throw error for invalid input", () => {
expect(() => parseConfig("invalid")).toThrow("Invalid config format");
});
it("should return error result for invalid type", () => {
const result = safeParseConfig("invalid");
expect(result.success).toBe(false);
if (!result.success) {
expect(result.error).toContain("Invalid config");
}
});
Testing Async Functions
it("should resolve with data on success", async () => {
const result = await fetchUser(123);
expect(result.id).toBe(123);
expect(result.name).toBeDefined();
});
it("should reject with error on failure", async () => {
await expect(fetchUser(-1)).rejects.toThrow("User not found");
});
Testing Type Narrowing
it("should narrow type based on discriminant", () => {
type Result = { success: true; data: string } | { success: false; error: string };
const handleResult = (result: Result) => {
if (result.success) {
type Test = Expect<AssertEqual<typeof result, { success: true; data: string }>>;
return result.data;
} else {
type Test = Expect<AssertEqual<typeof result, { success: false; error: string }>>;
return result.error;
}
};
});
Quick Reference
Commands
# Runtime tests
pnpm test # Run all runtime tests
pnpm test path/to/test # Run specific test file
pnpm test WIP # Run only WIP tests
pnpm test --exclude WIP # Run all except WIP (regression check)
pnpm test:watch # Run in watch mode
pnpm test:ui # Run with UI
# Type tests
pnpm test:types # Run all type tests
pnpm test:types GLOB # Run type tests matching pattern
pnpm test:types WIP # Run only WIP type tests
# Common patterns during development
pnpm test utils # Test all utils
pnpm test:types utils # Type test all utils
Test Quality Checklist
Before considering tests complete, verify:
- All exported functions have runtime tests
- Functions with complex types have type tests
- Happy path is tested
- Edge cases are covered (empty, null, undefined, boundaries)
- Error conditions are tested
- Tests are independent (can run in any order)
- Tests are deterministic (consistent results)
- Test names clearly describe what's being tested
- No regressions in existing tests
- Tests run quickly (unit tests < 100ms per test)
Phase Completion Checklist
Before closing out a phase:
- SNAPSHOT captured
- Log file created with starting position
- Tests written in `tests/unit/WIP/`
- Tests initially failed (proving validity)
- Implementation completed
- All WIP tests passing
- Full test suite run (no regressions)
- Log file updated with completion notes
- Tests remain in `tests/unit/WIP/` (DO NOT migrate automatically)
- User notified that tests are in WIP awaiting review
After user review and approval:
- Tests migrated from WIP to permanent locations
- `tests/unit/WIP/` directory removed
- Tests verified to pass in new locations
Summary
Effective testing requires understanding what to test, how to test it, and when to use different testing approaches:
- Type utilities → Type tests only
- Simple functions → Runtime tests (minimum)
- Complex functions → Both runtime and type tests
- Classes → Primarily runtime tests, add type tests for complex generics
Follow TDD principles: write tests first, implement to pass them, then refactor with confidence. Keep tests fast, focused, and independent.
For phase-based development, use the five-step workflow: SNAPSHOT → CREATE LOG → WRITE TESTS → IMPLEMENTATION → CLOSE OUT. This ensures comprehensive test coverage, prevents regressions, and maintains clear documentation of your progress.
When tests fail, understand why before fixing. Prioritize fixing implementation over changing tests, unless the test itself was wrong.