---
name: build-test-report
description: Run Maven build and tests atomically (60-75% overhead reduction)
allowed-tools: Bash
---

# Build Test Report Skill

**Purpose**: Run Maven build and tests in a single atomic operation with structured reporting, reducing LLM round-trips from 5-7 to 2-3.

**Performance**: 60-75% overhead reduction (build/test time itself is unchanged; only coordination overhead is reduced)
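
The core pattern is easy to picture: one shell invocation runs both phases and prints a machine-readable result. A minimal sketch of that idea (illustration only; the actual `build-test-report.sh` is not reproduced in this document):

```bash
#!/usr/bin/env bash
# Simplified sketch of the atomic build+test pattern - illustration only,
# not the real build-test-report.sh.
set -u
log=$(mktemp)

if ! ./mvnw clean compile > "$log" 2>&1; then
  # One failure path, one structured report - no extra round-trip to diagnose
  printf '{"status":"error","message":"Build failed - compilation errors"}\n'
  exit 1
fi

if ! ./mvnw test >> "$log" 2>&1; then
  printf '{"status":"error","message":"Tests failed"}\n'
  exit 1
fi

printf '{"status":"success","message":"Build and tests completed successfully"}\n'
```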

## When to Use This Skill

### ✅ Use build-test-report When:

- Running routine build verification
- Executing test suites after code changes
- Validating changes before commit or merge
- Performing CI/CD-style checks locally
- Producing structured output for programmatic processing
- Reducing waiting time for build feedback

### ❌ Do NOT Use When:

- Debugging specific tests (use Maven directly for richer output; see the example after this list)
- Running long-running integration tests (they may exceed the script's timeout)
- Using interactive Maven plugins (the script runs non-interactively)
- Building multiple modules separately with different configurations
- Inspecting detailed Maven output line by line
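
For the debugging cases above, plain Maven gives richer, real-time output. For example, with standard Maven/Surefire options (the test name is illustrative):

```bash
# Run one test with full Surefire output (standard -Dtest filter)
./mvnw test -Dtest=ValidationTest#testInvalidInput

# Full debug logging when investigating build configuration issues
./mvnw clean test -X
```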

## Performance Comparison

### Traditional Workflow (5-7 LLM round-trips, 30-50s overhead)

```
[LLM Round 1] Run compile
  → Bash: ./mvnw clean compile

[LLM Round 2] Check compile output
  → Parse output for errors
  → Decide if tests should run

[LLM Round 3] Run tests
  → Bash: ./mvnw test

[LLM Round 4] Check test output
  → Parse test results
  → Count failures

[LLM Round 5] Extract specific errors
  → Bash: grep for error messages

[LLM Round 6] Report to user
  → Summarize build and test results
```

**Total overhead**: 30-50 seconds (plus build/test time). **LLM round-trips**: 5-7.

### Optimized Workflow (2-3 LLM round-trips, 8-12s overhead)

```
[LLM Round 1] Execute build-test-report
  → Bash: build-test-report.sh
  → [Script runs build + tests + parses results]

[LLM Round 2] Report to user
  → Parse JSON output
  → Report results with error details
```

**Total overhead**: 8-12 seconds (plus build/test time). **LLM round-trips**: 2-3.

**Overhead savings**: 60-75% less coordination time

## Usage

### Basic Build and Test

```bash
# Run complete build and test suite
/workspace/main/.claude/scripts/build-test-report.sh
```

### Compile Only (Skip Tests)

```bash
# Fast build verification without tests
/workspace/main/.claude/scripts/build-test-report.sh --skip-tests
```

### With Maven Profile

```bash
# Use specific Maven profile
/workspace/main/.claude/scripts/build-test-report.sh --profile ci
```

### Single Module

```bash
# Build and test specific module
/workspace/main/.claude/scripts/build-test-report.sh --module formatter
```

### Combined Options

```bash
# Module-specific build without tests
/workspace/main/.claude/scripts/build-test-report.sh \
  --module formatter \
  --skip-tests \
  --profile dev
```

## Output Format

The script returns structured JSON:

```json
{
  "status": "success|error",
  "message": "Build and tests completed successfully",
  "total_duration_seconds": 45,
  "build": {
    "status": "success|failed",
    "duration_seconds": 30,
    "errors": ""
  },
  "tests": {
    "status": "success|failed|skipped",
    "duration_seconds": 15,
    "tests_run": 42,
    "tests_failed": 0,
    "tests_errors": 0,
    "tests_skipped": 3,
    "errors": ""
  },
  "working_directory": "/workspace/main",
  "timestamp": "2025-11-08T00:30:00-05:00"
}
```
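
Because the output is machine-readable, a caller can branch on it without re-parsing Maven logs. A minimal sketch using jq (assuming jq is available and the script path shown under Usage):

```bash
#!/usr/bin/env bash
# Branch on the report's JSON status field (assumes jq is installed).
report=$(/workspace/main/.claude/scripts/build-test-report.sh)

if [ "$(printf '%s' "$report" | jq -r '.status')" = "success" ]; then
  printf 'All good: %s tests run\n' "$(printf '%s' "$report" | jq -r '.tests.tests_run')"
else
  # Surface the captured error details from whichever phase failed
  printf '%s' "$report" | jq -r '.message, .build.errors, .tests.errors'
  exit 1
fi
```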

## LLM Integration Pattern

### Example 1: User asks to verify changes work

**User**: Run the build and tests to make sure my changes work

**LLM Round 1 (Execute):**

```bash
/workspace/main/.claude/scripts/build-test-report.sh
```

**Script Output:**

```json
{
  "status": "success",
  "total_duration_seconds": 45,
  "build": {"status": "success", "duration_seconds": 30},
  "tests": {
    "status": "success",
    "duration_seconds": 15,
    "tests_run": 42,
    "tests_failed": 0,
    "tests_errors": 0,
    "tests_skipped": 3
  }
}
```

**LLM Round 2 (Report):**

```
Build and tests completed successfully!

Build: ✅ Success (30s)
Tests: ✅ All passed (15s)
  - 42 tests run
  - 0 failures
  - 0 errors
  - 3 skipped

Total duration: 45 seconds
```

### Example 2: Build fails with compilation errors

**LLM Round 1 (Execute):**

```bash
/workspace/main/.claude/scripts/build-test-report.sh
```

**Script Output:**

```json
{
  "status": "error",
  "message": "Build failed - compilation errors",
  "build": {
    "status": "failed",
    "errors": "[ERROR] FormattingRule.java:[23,15] cannot find symbol..."
  }
}
```

**LLM Round 2 (Report + Guidance):**

```
Build failed with compilation errors:

Error in FormattingRule.java:23:
  cannot find symbol: method validate()

This appears to be a missing method. Would you like me to:
1. Add the missing method
2. Check if it's defined elsewhere
3. Review the code for the issue
```

### Example 3: Tests fail

**LLM Round 1 (Execute):**

```bash
/workspace/main/.claude/scripts/build-test-report.sh
```

**Script Output:**

```json
{
  "status": "error",
  "message": "Tests failed - 2 failures, 0 errors",
  "build": {"status": "success"},
  "tests": {
    "status": "failed",
    "tests_run": 42,
    "tests_failed": 2,
    "tests_errors": 0,
    "errors": "ValidationTest.testInvalidInput:42 expected: <true> but was: <false>..."
  }
}
```

**LLM Round 2 (Report + Next Steps):**

```
Build succeeded but tests failed:

Build: ✅ Success
Tests: ❌ 2 failures
  - 42 tests run
  - 2 failures
  - 0 errors

Failed Tests:
1. ValidationTest.testInvalidInput:42
   Expected: true, Actual: false

Would you like me to investigate the failing tests?
```

## Common Use Cases

### 1. Pre-Commit Verification

```bash
# Before committing, verify changes compile and pass tests
build-test-report.sh
```
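
One way to automate this is a git pre-commit hook; the sketch below assumes the script path used throughout this document, plus jq for parsing:

```bash
#!/usr/bin/env bash
# .git/hooks/pre-commit - abort the commit when the build or tests fail.
report=$(/workspace/main/.claude/scripts/build-test-report.sh)

if [ "$(printf '%s' "$report" | jq -r '.status')" != "success" ]; then
  echo "pre-commit: $(printf '%s' "$report" | jq -r '.message')" >&2
  exit 1
fi
```

Remember to make the hook executable (`chmod +x .git/hooks/pre-commit`).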

### 2. Quick Compile Check

```bash
# Fast verification that code compiles
build-test-report.sh --skip-tests
```

### 3. Module-Specific Testing

```bash
# Test only the module you changed
build-test-report.sh --module formatter
```

### 4. CI/CD Simulation

```bash
# Run same checks as CI pipeline
build-test-report.sh --profile ci
```

## Error Handling

### Build Failures

The script captures:

- ✅ Compilation errors with file/line numbers
- ✅ Dependency resolution failures
- ✅ Plugin execution errors

It returns JSON with `build.status = "failed"` and error details in `build.errors`.
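
Extraction of this kind can be done by grepping the captured Maven log for `[ERROR]` lines; a simplified sketch (the real script's parsing may differ):

```bash
# Sketch: capture Maven output, then extract compiler errors on failure.
log=$(mktemp)
if ! ./mvnw clean compile > "$log" 2>&1; then
  # Maven prefixes compiler diagnostics with [ERROR], including file:[line,col]
  grep '^\[ERROR\]' "$log" | head -20
fi
```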

### Test Failures

The script captures:

- ✅ Failed test count
- ✅ Test error count
- ✅ Test names and assertion messages

It returns JSON with `tests.status = "failed"` and details in `tests.errors`.
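
Test counts can be recovered from Surefire's summary line; a sketch assuming the standard `Tests run: N, Failures: N, Errors: N, Skipped: N` format:

```bash
# Sketch: pull test counts from Surefire's final summary line.
log=build.log                      # Maven output, e.g. ./mvnw test > build.log 2>&1
summary=$(grep 'Tests run:' "$log" | tail -1)
tests_run=$(echo "$summary"    | sed -E 's/.*Tests run: ([0-9]+).*/\1/')
tests_failed=$(echo "$summary" | sed -E 's/.*Failures: ([0-9]+).*/\1/')
tests_errors=$(echo "$summary" | sed -E 's/.*Errors: ([0-9]+).*/\1/')
echo "run=$tests_run failed=$tests_failed errors=$tests_errors"
```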

### Precondition Failures

The script validates that:

- ✅ the Maven wrapper (`mvnw`) exists
- ✅ `pom.xml` exists
- ✅ it is being run from the project root

It returns an error immediately if any precondition fails.
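
A plausible shape for these checks (assumed from the list above, not taken from the script itself):

```bash
# Sketch of the precondition checks implied by the list above.
fail() { printf '{"status":"error","message":"%s"}\n' "$1"; exit 1; }

[ -x ./mvnw ]  || fail "Maven wrapper (mvnw) not found or not executable"
[ -f pom.xml ] || fail "pom.xml not found - run from the project root"
```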

## Performance Characteristics

### Overhead Reduction

| Phase | Traditional | Optimized | Savings |
|-------|-------------|-----------|---------|
| Pre-build coordination | 10-15s | 2-3s | 73-80% |
| Build execution | N seconds | N seconds | (unchanged) |
| Post-build analysis | 8-12s | 0s | 100% |
| Pre-test coordination | 5-8s | 0s | 100% |
| Test execution | M seconds | M seconds | (unchanged) |
| Post-test analysis | 7-15s | 3-6s | ~60% |
| **Total overhead** | **30-50s** | **5-9s** | **82-88%** |

Note: build and test execution time (N + M seconds) is unchanged; only coordination overhead is reduced. The 5-9s total here is the best case from these phase estimates; the 8-12s figure in the workflow comparison (and the headline 60-75% claim) is the more conservative estimate.

### Frequency and Impact

**Expected usage**: 5-10 times per day

**Overhead savings per use**: ~25-40 seconds

**Daily impact**: 125-400 seconds saved (2-7 minutes; 5-10 uses × 25-40s)

**Monthly impact**: 60-200 minutes (1-3.3 hours, assuming ~30 days of use)

## Comparison with Manual Maven

### When to Use This Script

- ✅ Routine build/test verification
- ✅ Need structured output for reporting
- ✅ Want automatic error extraction
- ✅ Checking if changes work before commit

### When to Use Maven Directly

- ⚠️ Debugging specific test failures (you need the detailed output; see the example under "Do NOT Use When" above)
- ⚠️ Using interactive Maven plugins
- ⚠️ Needing real-time output during long builds
- ⚠️ Investigating build configuration issues

## Related

- **Bash tool**: for running Maven directly when debugging
- `git-squash-optimized.sh`: commit management after successful builds
- `write-and-commit.sh`: creating files to be tested
- `build-system.md`: complete Maven configuration documentation