Claude Code Plugins

Community-maintained marketplace


This skill should be used when the user asks to "write tests", "test strategy", "coverage", "unit test", "integration test", or needs testing guidance. Provides testing methodology and patterns.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

---
name: Testing Patterns
description: This skill should be used when the user asks to "write tests", "test strategy", "coverage", "unit test", "integration test", or needs testing guidance. Provides testing methodology and patterns.
version: 0.1.0
---

Provide testing patterns and strategies for comprehensive test coverage and maintainable test suites.

## Test Types

### Unit Tests

Test individual functions/methods in isolation.

- Scope: a single function, class, or module
- Characteristics: fast, isolated, deterministic
- Best for: business logic, utility functions, transformations

### Integration Tests

Test the interaction between components.

- Scope: multiple components working together
- Characteristics: slower, may use real dependencies
- Best for: API endpoints, database operations, service interactions

### End-to-End Tests

Test complete user workflows.

- Scope: the full application stack
- Characteristics: slowest, tests real user scenarios
- Best for: critical user journeys, smoke tests

## Test Structure

### Arrange-Act-Assert (AAA)

A three-phase test structure for clear organization: it separates setup, execution, and verification into distinct phases.

Arrange: set up test data and preconditions.

```ruby
user = User.new(name: "John")
cart = ShoppingCart.new(user)
```

Act: execute the code under test.

```ruby
total = cart.calculate_total
```

Assert: verify the expected outcome.

```ruby
assert_equal 0, total
```
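The three phases above can be assembled into one complete, runnable sketch. `User` and `ShoppingCart` here are minimal hypothetical stand-ins defined just for the example, not part of any real codebase:

```ruby
# Hypothetical domain objects for the example.
User = Struct.new(:name)

class ShoppingCart
  def initialize(user)
    @user = user
    @items = []
  end

  # Record the price of an item added to the cart.
  def add_item(price)
    @items << price
  end

  # Sum of all item prices; an empty cart totals zero.
  def calculate_total
    @items.sum
  end
end

# Arrange: set up test data and preconditions
user = User.new("John")
cart = ShoppingCart.new(user)

# Act: execute the code under test
total = cart.calculate_total

# Assert: verify the expected outcome
raise "expected 0, got #{total}" unless total == 0
```

Keeping the three phases visually separated, as above, makes it immediately clear which line is the behavior under test.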

### Given-When-Then

A BDD-style test structure focusing on behavior. It emphasizes business behavior over technical implementation.

Given: initial context (preconditions).

```ruby
given_a_user_with_an_empty_cart
```

When: action or trigger.

```ruby
when_the_user_calculates_total
```

Then: expected outcome.

```ruby
then_the_total_should_be_zero
```
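One way to realize the step names above is as small helper methods on a spec object, so the test body reads like the behavior specification. `CartSpec` and its methods are hypothetical names invented for this sketch:

```ruby
class CartSpec
  # Given: initial context
  def given_a_user_with_an_empty_cart
    @cart = []   # item prices; empty for this scenario
  end

  # When: the action under test
  def when_the_user_calculates_total
    @total = @cart.sum
  end

  # Then: the expected outcome
  def then_the_total_should_be(expected)
    raise "expected #{expected}, got #{@total}" unless @total == expected
  end
end

spec = CartSpec.new
spec.given_a_user_with_an_empty_cart
spec.when_the_user_calculates_total
spec.then_the_total_should_be(0)
```

Frameworks such as RSpec or Cucumber provide this vocabulary natively; the helper-method version shows the idea without any dependency.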

## Test Doubles

### Stubs

Provide canned responses for dependencies. Use them to replace slow or unreliable dependencies.

```ruby
api_client = stub(
  fetch_user: { id: 1, name: "John" }
)
```

### Mocks

Verify that interactions with dependencies occurred, and that methods were called with the correct arguments.

```ruby
email_service = mock()
email_service.expect(:send_email, args: ["user@example.com", "Welcome"])
user_service.register(email_service)
email_service.verify
```

### Spies

Record calls while using the real implementation. Use them to verify side effects without changing behavior.

```ruby
logger = spy(Logger.new)
service.process(logger)
assert_called logger, :log, with: "Processing complete"
```

### Fakes

A working implementation suitable for testing, such as an in-memory database or a fake file system.

```ruby
class FakeDatabase
  def initialize
    @data = {}
  end

  def save(key, value)
    @data[key] = value
  end

  def find(key)
    @data[key]
  end
end
```
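To see the fake in action, here is a sketch of a hypothetical `UserRepository` exercised against the in-memory `FakeDatabase` instead of a real datastore (the class is repeated so the example is self-contained):

```ruby
# In-memory stand-in for a real database.
class FakeDatabase
  def initialize
    @data = {}
  end

  def save(key, value)
    @data[key] = value
  end

  def find(key)
    @data[key]
  end
end

# Hypothetical code under test: only depends on save/find,
# so a real database and the fake are interchangeable.
class UserRepository
  def initialize(db)
    @db = db
  end

  def register(id, name)
    @db.save(id, name)
  end

  def name_of(id)
    @db.find(id)
  end
end

repo = UserRepository.new(FakeDatabase.new)
repo.register(1, "John")
raise unless repo.name_of(1) == "John"
```

Because the fake honors the same interface, tests run fast and deterministically while still exercising real read/write behavior.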

## Test Naming

### Descriptive Names

Test names should clearly describe the scenario and outcome. Format: test_[method]_[scenario]_[expected_result]

```ruby
test_calculateTotal_withEmptyCart_returnsZero
test_calculateTotal_withMultipleItems_returnsSumOfPrices
test_calculateTotal_withDiscount_appliesDiscountCorrectly
```

### Should/When Style

BDD-style naming that reads like natural language. Format: [method]_should_[expected_behavior]_when_[condition]

```ruby
calculateTotal_should_returnZero_when_cartIsEmpty
calculateTotal_should_applyDiscount_when_couponIsValid
calculateTotal_should_throwError_when_pricesAreNegative
```

## Best Practices

### Test the Happy Path First

Start with the normal, expected flow before edge cases.

```ruby
test_userLogin_withValidCredentials_succeeds
test_userLogin_withInvalidPassword_fails
test_userLogin_withLockedAccount_fails
```

### Test Edge Cases

Test boundary conditions and limits: empty inputs, maximum values, null values, zero values, negative numbers.

### Test Error Cases

Verify that error handling paths work correctly: invalid inputs, network failures, permission errors, timeout scenarios.

### Isolate Tests

Each test should be independent. Use setup/teardown to reset state.

```ruby
def setup
  @database = TestDatabase.new
  @service = UserService.new(@database)
end

def teardown
  @database.clear
end
```

### Make Tests Readable

Tests serve as documentation.

```ruby
# Good: clear and descriptive
test_userRegistration_withExistingEmail_returnsError

# Bad: unclear purpose
test_user_reg_1
```

### One Assertion per Concept

Each test should verify one logical concept.

```ruby
# Good: single concept
def test_userCreation_setsDefaultRole
  user = create_user
  assert_equal "member", user.role
end

# Avoid: multiple unrelated assertions
def test_userCreation
  user = create_user
  assert_equal "member", user.role
  assert_not_nil user.email
  assert_true user.active
end
```

### Use Test Fixtures and Factories

Extract common test data setup into reusable factories.

```ruby
def create_test_user(overrides = {})
  defaults = { name: "Test User", email: "test@example.com", role: "member" }
  User.new(defaults.merge(overrides))
end
```

### Avoid Magic Numbers

Use named constants for test values.

```ruby
# Good
VALID_USER_AGE = 25
MINIMUM_AGE = 18

def test_userValidation_withValidAge_succeeds
  user = User.new(age: VALID_USER_AGE)
  assert user.valid?
end

# Bad
def test_userValidation_withValidAge_succeeds
  user = User.new(age: 25)
  assert user.valid?
end
```
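The factory pattern above can be made runnable with a hypothetical `User` struct standing in for a real model class; the point is that callers only spell out the attributes that matter to their test:

```ruby
# Hypothetical model: a keyword-initialized Struct for the example.
User = Struct.new(:name, :email, :role, keyword_init: true)

# Factory: sensible defaults, overridable per test.
def create_test_user(overrides = {})
  defaults = { name: "Test User", email: "test@example.com", role: "member" }
  User.new(**defaults.merge(overrides))
end

# Only the attribute under test is specified; the rest stay at defaults.
admin = create_test_user(role: "admin")
```

When a default changes (say, the default role), only the factory needs updating, not every test that creates a user.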

### Test Corner Cases

Test unusual combinations and scenarios: concurrent access, timezone edge cases, leap years, DST transitions.
## Code Coverage

- Line coverage: the percentage of code lines executed during tests; measures which lines are exercised.
- Branch coverage: the percentage of code branches (if/else, switch) taken during tests; more thorough than line coverage because it measures decision paths.
- Function coverage: the percentage of functions/methods called during tests; identifies untested functions.

Aim for high coverage but prioritize meaningful tests over coverage numbers:

- 80%+ coverage is a good target for critical code paths.
- 100% coverage does not guarantee bug-free code.
- Focus on testing behavior, not on achieving coverage metrics.

## Anti-Patterns

### Testing Implementation Details

Testing implementation details instead of behavior. Focus on observable behavior and outcomes, not internal implementation details. Test what the code does, not how it does it.

### Over-Mocking

Over-mocking dependencies throughout test suites. Use real implementations where practical; excessive mocking often indicates poor design. Only mock external dependencies or slow operations.

### Flaky Tests

Tests that sometimes pass and sometimes fail. Ensure tests are deterministic by controlling time, randomness, and async operations: use fixed timestamps, seeded random generators, and proper async handling.

### Slow Tests

Tests that take too long to run. Use unit tests for fast feedback; reserve slow integration/e2e tests for critical paths. Unit tests should run in milliseconds, not seconds.

### Order-Dependent Tests

Tests that depend on execution order or shared state. Make each test independent with proper setup/teardown and isolated state. Each test should create its own test data.
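One concrete way to cure a flaky test is to inject the clock and the random generator instead of calling `Time.now` and `rand` directly. The sketch below uses a hypothetical `TokenGenerator`; with a fixed timestamp and a seeded `Random`, the output is identical on every run:

```ruby
class TokenGenerator
  # Dependency-inject the clock and RNG so tests can pin them down.
  def initialize(clock: -> { Time.now }, rng: Random.new)
    @clock = clock
    @rng = rng
  end

  def generate
    "#{@clock.call.to_i}-#{@rng.rand(1000)}"
  end
end

# Production code uses the real defaults; tests pass fixed values:
fixed_time = Time.at(1_700_000_000)
deterministic = TokenGenerator.new(clock: -> { fixed_time }, rng: Random.new(42))
token = deterministic.generate   # same token on every run
```

The same injection pattern applies to network calls and async scheduling: any source of nondeterminism that the test cannot control should arrive through a seam the test can replace.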