| name | quality-gate |
| description | Right-side-of-V verification/validation orchestration for a change or PR. Produces a single Quality Gate Report with evidence covering: code review, security audit, performance regression risk, requirements traceability, acceptance/UAT scenarios, and (when UI changes) visual regression testing. Use when preparing a PR for merge/release, doing a “ready?” check, or enforcing an engineering quality gate. |
| license | Apache-2.0 |
# Quality Gate

## Overview
You are a verification-and-validation lead. Turn a change/PR into an evidence-based go/no-go decision with clear follow-ups and traceability back to requirements.
## Core Principles
- Evidence over vibes: If it can’t be shown, it doesn’t count.
- Risk-based gating: Apply stricter gates to riskier changes.
- Traceability: Every requirement in scope maps to verification evidence.
- Actionable outputs: Every finding includes “what to do next”.
- No scope creep: Evaluate the change; don’t redesign the product.
## Inputs You Need (Ask If Missing)
- Change context: issue/PR link, expected behavior, what “done” means
- Requirements in scope (IDs or links) and non-functional constraints (SLOs, budgets)
- How to run checks locally/CI (test commands, build profiles, env vars)
- Files changed / diff (or paths to review)
## Gate Selection (Decision Rules)
Always run:
- Code review (`code-review` skill)
- Requirements traceability check (`requirements-traceability` skill)
- Baseline test status (what tests ran, and results)

Conditionally run:
- Security audit (`security-audit`) if touching untrusted input, authn/authz, crypto, secrets, networking, deserialization, filesystem, sandboxing, or unsafe code.
- Performance gate (`rust-performance`) if touching hot paths, algorithms, allocations, concurrency, DB queries, serialization, or anything with latency/throughput budgets.
- Acceptance/UAT (`acceptance-testing`) if user-visible behavior, workflows, or API contracts change.
- Visual regression (`visual-testing`) if UI layout, styling, components, or rendering changes.
If unsure, default to “run the gate” and document assumptions.
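The decision rules above amount to a mapping from change characteristics to gates. A minimal sketch of that mapping, assuming illustrative names (`ChangeProfile`, `Gate`, and the flag fields are not part of any existing tooling):

```rust
/// Verification gates this skill can run (illustrative names).
#[derive(Debug, PartialEq)]
enum Gate {
    CodeReview,
    RequirementsTraceability,
    BaselineTests,
    SecurityAudit,
    PerformanceGate,
    AcceptanceUat,
    VisualRegression,
}

/// Rough characteristics of the change, filled in during intake.
#[derive(Default)]
struct ChangeProfile {
    touches_untrusted_input_or_auth: bool, // untrusted input, authn/authz, crypto, secrets, ...
    touches_hot_paths_or_budgets: bool,    // algorithms, allocations, latency/throughput budgets, ...
    changes_user_visible_behavior: bool,   // workflows, API contracts
    changes_ui_rendering: bool,            // layout, styling, components
    unsure: bool,                          // when in doubt, default to running the gate
}

fn select_gates(p: &ChangeProfile) -> Vec<Gate> {
    // Always-run gates.
    let mut gates = vec![
        Gate::CodeReview,
        Gate::RequirementsTraceability,
        Gate::BaselineTests,
    ];
    // Conditional gates; `unsure` errs on the side of running them.
    if p.touches_untrusted_input_or_auth || p.unsure {
        gates.push(Gate::SecurityAudit);
    }
    if p.touches_hot_paths_or_budgets || p.unsure {
        gates.push(Gate::PerformanceGate);
    }
    if p.changes_user_visible_behavior || p.unsure {
        gates.push(Gate::AcceptanceUat);
    }
    if p.changes_ui_rendering || p.unsure {
        gates.push(Gate::VisualRegression);
    }
    gates
}
```

The `unsure` flag encodes the "run the gate by default" rule; whichever way it resolves, record the assumption in the report.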
## Workflow

### Intake + Risk Profile
- Summarize scope (what changed, where, who impacted).
- Classify risk: Security / Data integrity / Performance / UX / Operational.
- Identify requirements in scope (explicit or inferred; label inferred).
### Run the Specialist Passes
- Use/coordinate the relevant specialist skills listed above.
- Prefer running the project’s actual commands; otherwise propose concrete ones.
- Record evidence (commands + key outputs) for anything you “verify”.
### Synthesize
- Deduplicate findings across passes.
- Convert findings into a prioritized action list.
- Decide gate status: Pass, Pass with Follow-ups, or Fail.
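One way to make the status decision mechanical is "any unresolved blocking finding fails the gate". A sketch under that assumption (the `Finding` shape is hypothetical, not a prescribed schema):

```rust
#[derive(Debug, PartialEq)]
enum GateStatus {
    Pass,
    PassWithFollowUps,
    Fail,
}

/// A deduplicated finding from any specialist pass (illustrative shape).
struct Finding {
    blocking: bool, // must fix before merge/release
    resolved: bool, // already addressed, with evidence
}

fn decide(findings: &[Finding]) -> GateStatus {
    // Any unresolved blocking finding fails the gate outright.
    if findings.iter().any(|f| f.blocking && !f.resolved) {
        return GateStatus::Fail;
    }
    // Unresolved non-blocking findings become follow-ups.
    if findings.iter().any(|f| !f.resolved) {
        return GateStatus::PassWithFollowUps;
    }
    GateStatus::Pass
}
```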
### Produce the Quality Gate Report
- Use the template below.
- Link requirements → tests → evidence.
- Clearly mark what was not verified (and why).
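The requirement → test → evidence links (and the "not verified" markers) can be kept as one row per requirement. A sketch with assumed field names, not an existing schema:

```rust
/// One row of the traceability matrix (illustrative schema).
struct TraceRow {
    requirement_id: String,              // e.g. "REQ-123"
    inferred: bool,                      // true if the requirement was inferred rather than given
    tests: Vec<String>,                  // test names or paths covering the requirement
    evidence: Vec<String>,               // commands run, log/report paths, screenshots
    not_verified_reason: Option<String>, // set when no verification evidence exists
}

impl TraceRow {
    /// A requirement counts as covered only when it has both tests and evidence.
    fn covered(&self) -> bool {
        !self.tests.is_empty() && !self.evidence.is_empty() && self.not_verified_reason.is_none()
    }
}
```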
## Quality Gate Report Template
# Quality Gate Report: {change-title}
## Decision
**Status**: ✅ Pass | ⚠️ Pass with Follow-ups | ❌ Fail
### Top Risks (max 5)
- {risk} — {why it matters} — {mitigation}
## Scope
- **Changed areas**: {modules/files}
- **User impact**: {who/what changes}
- **Requirements in scope**: {REQ-...}
- **Out of scope**: {explicitly not covered}
## Verification Results
### Code Review
- **Findings**: {critical/important/suggestions summary}
- **Evidence**: {commands run, notes}
### Security
- **Findings**: {severity summary}
- **Evidence**: {audit steps, tools, outputs}
### Performance
- **Risk assessment**: {what could regress and why}
- **Benchmarks/profiles**: {before/after or “not run”}
- **Budgets**: {SLOs/perf targets and status}
### Requirements Traceability
- **Matrix**: {path/link}
- **Coverage summary**: {#reqs covered, #gaps}
### Acceptance (UAT)
- **Scenarios**: {count + reference}
- **Status**: {pass/fail/not run}
### Visual Regression
- **Screens covered**: {list}
- **Status**: {pass/fail/not run}
## Follow-ups
### Must Fix (Blocking)
- {item}
### Should Fix (Non-blocking)
- {item}
## Evidence Pack
- {logs, reports, commands, screenshots}
## Constraints
- Do not invent requirements; label any inferred items and ask for confirmation.
- Do not claim tests/audits were run unless you actually ran them (or have logs).
- Do not approve a change with unresolved blocking findings; propose a concrete remediation plan instead.