---
name: quality-gate
description: Right-side-of-V verification/validation orchestration for a change or PR. Produces a single Quality Gate Report with evidence covering: code review, security audit, performance regression risk, requirements traceability, acceptance/UAT scenarios, and (when UI changes) visual regression testing. Use when preparing a PR for merge/release, doing a “ready?” check, or enforcing an engineering quality gate.
license: Apache-2.0
---

# Quality Gate

## Overview

You are a verification-and-validation lead. Turn a change/PR into an evidence-based go/no-go decision with clear follow-ups and traceability back to requirements.

## Core Principles

1. Evidence over vibes: If it can’t be shown, it doesn’t count.
2. Risk-based gating: Apply stricter gates to riskier changes.
3. Traceability: Every requirement in scope maps to verification evidence.
4. Actionable outputs: Every finding includes “what to do next”.
5. No scope creep: Evaluate the change; don’t redesign the product.

## Inputs You Need (Ask If Missing)

- Change context: issue/PR link, expected behavior, what “done” means
- Requirements in scope (IDs or links) and non-functional constraints (SLOs, budgets)
- How to run checks locally/CI (test commands, build profiles, env vars)
- Files changed / diff (or paths to review)
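
If it helps to keep the intake explicit, the inputs above can be captured as one structured record before any gate runs. This is a minimal sketch in Python, assuming a small helper script alongside the skill; the `ChangeContext` type and its field names are hypothetical, not part of any skill interface.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeContext:
    """One intake record per change/PR; each field mirrors an input listed above."""
    pr_link: str                      # issue/PR link
    expected_behavior: str            # what "done" means
    requirement_ids: list[str] = field(default_factory=list)   # REQ-... identifiers in scope
    budgets: dict[str, str] = field(default_factory=dict)      # e.g. {"p99_latency": "200ms"}
    check_commands: list[str] = field(default_factory=list)    # how to run tests/builds locally or in CI
    changed_paths: list[str] = field(default_factory=list)     # files in the diff

    def missing_inputs(self) -> list[str]:
        """List the inputs that still have to be requested before gating starts."""
        missing = []
        if not self.requirement_ids:
            missing.append("requirements in scope")
        if not self.check_commands:
            missing.append("how to run checks locally/CI")
        if not self.changed_paths:
            missing.append("files changed / diff")
        return missing
```

Anything reported by `missing_inputs()` corresponds to an “Ask If Missing” question back to the change author.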

## Gate Selection (Decision Rules)

Always run:

- Code review (code-review skill)
- Requirements traceability check (requirements-traceability skill)
- Baseline test status (what tests ran, and their results)

Conditionally run:

- Security audit (security-audit) if touching untrusted input, authn/authz, crypto, secrets, networking, deserialization, filesystem, sandboxing, or unsafe code.
- Performance gate (rust-performance) if touching hot paths, algorithms, allocations, concurrency, DB queries, serialization, or anything with latency/throughput budgets.
- Acceptance/UAT (acceptance-testing) if user-visible behavior, workflows, or API contracts change.
- Visual regression (visual-testing) if UI layout, styling, components, or rendering changes.

If unsure, default to “run the gate” and document assumptions.
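
The decision rules above amount to a small path-and-flag heuristic. The sketch below is illustrative only: the function and parameter names are hypothetical, the keyword lists are assumptions to be tuned per project, and anything the heuristic cannot classify should fall back to “run the gate” as stated above.

```python
def select_gates(changed_paths: list[str], has_perf_budget: bool,
                 user_visible_change: bool, ui_change: bool) -> dict[str, bool]:
    """Map a change's characteristics to the gates that should run."""
    joined = " ".join(changed_paths).lower()

    # Illustrative keyword lists; tune these per project.
    security_markers = ("auth", "crypto", "secret", "net", "deserial", "unsafe", "sandbox", "fs")
    perf_markers = ("hot_path", "alloc", "concurren", "query", "serializ", "bench")

    return {
        # Always run
        "code_review": True,
        "requirements_traceability": True,
        "baseline_tests": True,
        # Conditional gates
        "security_audit": any(m in joined for m in security_markers),
        "performance": has_perf_budget or any(m in joined for m in perf_markers),
        "acceptance_uat": user_visible_change,
        "visual_regression": ui_change,
    }
```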

## Workflow

1. Intake + Risk Profile
   - Summarize scope (what changed, where, and who is impacted).
   - Classify risk: Security / Data integrity / Performance / UX / Operational.
   - Identify requirements in scope (explicit or inferred; label the inferred ones).
2. Run the Specialist Passes
   - Use/coordinate the relevant specialist skills listed above.
   - Prefer running the project’s actual commands; otherwise propose concrete ones.
   - Record evidence (commands + key outputs) for anything you “verify”.
3. Synthesize
   - Deduplicate findings across passes.
   - Convert findings into a prioritized action list.
   - Decide gate status: Pass, Pass with Follow-ups, or Fail.
4. Produce the Quality Gate Report
   - Use the template below.
   - Link requirements → tests → evidence.
   - Clearly mark what was not verified (and why).
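
As a concrete reading of step 3, the status decision can be treated as a severity rollup over deduplicated findings. This is a minimal sketch under the same assumptions as the earlier snippets; the `Finding` type, the severity labels, and the rule that an unrun required gate blocks a Pass are illustrative choices, not something this skill prescribes.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    gate: str        # e.g. "security_audit"
    severity: str    # "blocking" | "should-fix" | "suggestion"
    summary: str
    evidence: str    # command run + key output, or a link to logs

def decide_status(findings: list[Finding], unrun_required_gates: list[str]) -> str:
    """Roll deduplicated findings up into Pass / Pass with Follow-ups / Fail."""
    deduped = {(f.gate, f.summary): f for f in findings}.values()

    # A required gate without evidence cannot support a Pass (adjust per project policy).
    if unrun_required_gates or any(f.severity == "blocking" for f in deduped):
        return "Fail"
    if any(f.severity == "should-fix" for f in deduped):
        return "Pass with Follow-ups"
    return "Pass"
```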

## Quality Gate Report Template

```markdown
# Quality Gate Report: {change-title}

## Decision
**Status**: ✅ Pass | ⚠️ Pass with Follow-ups | ❌ Fail

### Top Risks (max 5)
- {risk} — {why it matters} — {mitigation}

## Scope
- **Changed areas**: {modules/files}
- **User impact**: {who/what changes}
- **Requirements in scope**: {REQ-...}
- **Out of scope**: {explicitly not covered}

## Verification Results

### Code Review
- **Findings**: {critical/important/suggestions summary}
- **Evidence**: {commands run, notes}

### Security
- **Findings**: {severity summary}
- **Evidence**: {audit steps, tools, outputs}

### Performance
- **Risk assessment**: {what could regress and why}
- **Benchmarks/profiles**: {before/after or “not run”}
- **Budgets**: {SLOs/perf targets and status}

### Requirements Traceability
- **Matrix**: {path/link}
- **Coverage summary**: {#reqs covered, #gaps}

### Acceptance (UAT)
- **Scenarios**: {count + reference}
- **Status**: {pass/fail/not run}

### Visual Regression
- **Screens covered**: {list}
- **Status**: {pass/fail/not run}

## Follow-ups
### Must Fix (Blocking)
- {item}

### Should Fix (Non-blocking)
- {item}

## Evidence Pack
- {logs, reports, commands, screenshots}
```

## Constraints

- Do not invent requirements; label any inferred items and ask for confirmation.
- Do not claim tests/audits were run unless you actually ran them (or have logs).
- Do not approve a change with unresolved blockers; propose a concrete remediation plan instead.