| name | quality-gate |
| description | Synthesize all quality assessments (risk, test-design, traceability, NFR) into evidence-based gate decision (PASS/CONCERNS/FAIL/WAIVED) with comprehensive rationale. Generates both YAML (CI/CD) and Markdown (human review) reports with action items. Use during final quality review to make go/no-go deployment decisions based on comprehensive quality evidence. |
| version | 2 |
| category | Quality |
Quality Gate Decision
The quality-gate skill performs the final quality gate assessment by synthesizing all previous quality evaluations and making an evidence-based decision on whether the implementation is ready to proceed (merge, deploy, release). This skill integrates risk management, test coverage, requirements traceability, and non-functional requirements into a comprehensive quality score across 6 dimensions, then applies configurable criteria to make a gate decision with clear rationale.
Unlike individual quality assessments that evaluate specific aspects (risks, tests, NFRs), the quality gate provides a holistic view of overall quality by weighting and combining all assessments. The gate produces two outputs: a YAML report for CI/CD automation and a Markdown report for human review, enabling both automated and manual quality processes. Decisions include PASS (all criteria met), CONCERNS (some issues but can proceed), FAIL (critical issues block proceeding), or WAIVED (issues acknowledged and waived with justification).
The assessment evaluates 6 quality dimensions with configurable weights: Risk Management (25%), Test Coverage (20%), Traceability (20%), NFR (25%), Implementation Quality (5%), and Compliance (5%). Each dimension is scored independently, then combined into an overall quality score. Critical gap rules (e.g., no critical security gaps) and dimension minimums (e.g., traceability ≥65%) are applied to determine if the gate passes.
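To make the weighting concrete, here is a minimal Python sketch of the combination step, assuming the default weights above; the function name and example scores are illustrative, and real weights come from `.claude/config.yaml`:

```python
# Minimal sketch of the weighted scoring model, assuming the default
# weights listed above; real values come from .claude/config.yaml.

DEFAULT_WEIGHTS = {
    "risk_management": 0.25,
    "test_coverage": 0.20,
    "traceability": 0.20,
    "nfr": 0.25,
    "implementation_quality": 0.05,
    "compliance": 0.05,
}

def overall_quality_score(dimension_scores: dict[str, float],
                          weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Combine per-dimension scores (0-100) into one weighted overall score."""
    return sum(dimension_scores[name] * weight for name, weight in weights.items())

# Illustrative scores only.
scores = {"risk_management": 82, "test_coverage": 78, "traceability": 66,
          "nfr": 80, "implementation_quality": 90, "compliance": 100}
print(f"Overall: {overall_quality_score(scores):.1f}%")  # Overall: 78.8%
```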
When to Use This Skill
This skill should be used when:
- Final quality decision is needed before merge/deploy/release
- All quality assessments need to be synthesized into overall quality score
- Audit trail for compliance documentation is required
- Quality checks need to be integrated into CI/CD pipeline
- Quality waivers need to be tracked with justification
- Determination of whether implementation meets quality standards is needed
This skill is particularly valuable:
- After all quality assessments complete (risk, test-design, trace, nfr)
- As final checkpoint before code merge or deployment
- For regulatory compliance requiring audit trail
- When automated quality gates are part of CI/CD
- For tracking quality trends over time (score history)
This skill should NOT be used when:
- Quality assessments haven't been performed yet (run assessments first)
- Task is in early planning stage (no implementation to assess)
- You only need specific assessment (use individual assessment skills instead)
Prerequisites
Before running quality-gate, ensure you have:
- Task specification file with implementation details
- Project configuration (.claude/config.yaml) with gate criteria
- At least one quality assessment completed:
  - Minimum: traceability OR NFR (functional or quality attributes)
  - Recommended: all 4 assessments (risk, test-design, traceability, NFR)
- Waiver requests documented (if any gaps require waiving)
Optional but recommended:
- All 4 quality assessments for comprehensive gate
- Gate criteria configured in project config
- CI/CD pipeline configured to consume YAML output
Sequential Quality Gate Process
This skill executes through 9 sequential steps (Step 0 + Steps 1-8). Each step must complete successfully before proceeding. The process systematically synthesizes all quality dimensions, calculates scores, applies criteria, and makes the final gate decision.
Step 0: Load Configuration and Assessments
Purpose: Load project configuration, task specification, and all available quality assessment reports. Determine which assessments are available and verify minimum requirements are met.
Actions:
- Load project configuration from `.claude/config.yaml` (gate criteria, dimension weights, thresholds)
- Read task specification file (extract task ID, title, implementation status)
- Find and load latest quality assessments for this task:
  - Risk profile: `.claude/quality/assessments/{task-id}-risk-{latest-date}.md`
  - Test design: `.claude/quality/assessments/{task-id}-test-design-{latest-date}.md`
  - Traceability: `.claude/quality/assessments/{task-id}-trace-{latest-date}.md`
  - NFR assessment: `.claude/quality/assessments/{task-id}-nfr-{latest-date}.md`
- Extract key metrics from each available assessment (scores, gaps, recommendations)
- Check minimum assessment requirements (at least traceability OR NFR required)
- Load any waiver requests from task file
- Prepare output paths for YAML and Markdown reports
Halt If:
- Config file missing or invalid
- Task file not found
- Minimum assessments not available (at least one of traceability or NFR is required)
- Cannot create output directory
Output: Configuration summary, task details, assessments available, minimum requirements check
See: references/templates.md#step-0-configuration-loading-output for complete format
See: references/gate-dimensions.md for dimension definitions
Step 1: Synthesize Risk Management Dimension
Purpose: Evaluate risk management effectiveness from risk profile assessment. Calculate risk management score based on mitigation coverage, test coverage for high risks, and risk score distribution.
Actions:
- Load risk profile data (if available): total risks, critical/high/medium/low counts, mitigation status, test coverage
- Calculate risk management score using formula:
  `Risk Management Score = (Mitigation Coverage × 0.5) + (Test Coverage for High Risks × 0.3) + (Risk Score Distribution × 0.2)`
- Determine risk management status (PASS ≥75%, CONCERNS 50-74%, FAIL <50%)
- Identify risk management gaps (critical risks without mitigation, high risks without tests)
- Calculate weighted contribution to overall gate (score × 25% weight)
If Risk Profile Not Available:
- Mark dimension as N/A
- Redistribute weight to other dimensions
- Note assumption of acceptable risk management
Output: Risk management score, status, risk counts, mitigation/test coverage, weighted contribution
See: references/templates.md#step-1-risk-management-dimension-output and gate-dimensions.md#risk-management
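When an assessment is unavailable, the dimension is marked N/A and its weight is redistributed (as in Step 1 above). A minimal sketch of one plausible strategy, simple renormalization, which is an assumption since the configured criteria govern the real behavior; it reuses `DEFAULT_WEIGHTS` from the earlier sketch:

```python
# Sketch of weight redistribution for an N/A dimension. Renormalizing the
# remaining weights to sum to 1.0 is an assumed strategy; reuses
# DEFAULT_WEIGHTS from the earlier sketch.

def redistribute_weights(weights: dict[str, float],
                         missing: set[str]) -> dict[str, float]:
    available = {name: w for name, w in weights.items() if name not in missing}
    total = sum(available.values())
    return {name: w / total for name, w in available.items()}

# Example: no risk profile exists for this task.
adjusted = redistribute_weights(DEFAULT_WEIGHTS, missing={"risk_management"})
# test_coverage now carries 0.20 / 0.75 ≈ 0.267 of the overall score.
```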
Step 2: Synthesize Test Coverage Dimension
Purpose: Evaluate test strategy and coverage from test-design assessment. Calculate test coverage score based on P0 tests passing, P1 tests implemented, and overall coverage percentage.
Actions:
- Load test design data (if available): total tests by priority (P0/P1/P2), tests by level (unit/integration/e2e), implementation status, pass status, coverage percentage
- Calculate test coverage score using formula:
  `Test Coverage Score = (P0 Tests Passing × 0.5) + (P1 Tests Implemented × 0.3) + (Coverage Percentage × 0.2)`
- Determine test coverage status (PASS ≥80%, CONCERNS 65-79%, FAIL <65%)
- Identify test coverage gaps (P0 tests not passing, P1 tests not implemented, coverage below threshold)
- Calculate weighted contribution to overall gate (score × 20% weight)
If Test Design Not Available:
- Mark dimension as N/A
- Fallback: Use test coverage from traceability if available
- Note assumption that tests exist
Output:
✓ Test Coverage dimension synthesized
✓ Score: {score}% ({status})
✓ P0 Tests: {passing}/{total} passing
✓ P1 Tests: {implemented}/{total} implemented
✓ Coverage: {coverage}%
✓ Contribution to Gate: {weighted_score} points (20% weight)
Reference: See gate-dimensions.md for test coverage scoring methodology.
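The PASS/CONCERNS/FAIL banding recurs across dimensions with different thresholds. A hypothetical helper (not part of the skill's API) that the later sketches reuse:

```python
# Hypothetical helper for the recurring PASS/CONCERNS/FAIL banding; each
# dimension supplies its own thresholds (test coverage: 80/65 as above).

def dimension_status(score: float, pass_min: float, concerns_min: float) -> str:
    if score >= pass_min:
        return "PASS"
    if score >= concerns_min:
        return "CONCERNS"
    return "FAIL"

print(dimension_status(72, pass_min=80, concerns_min=65))  # CONCERNS
```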
Step 3: Synthesize Traceability Dimension
Purpose: Evaluate requirements traceability from traceability assessment. Calculate traceability score based on implementation coverage, test coverage, and gap coverage.
Actions:
- Load traceability data (if available): total AC, implementation status (implemented/partial/not implemented), test status (tested/partial/not tested), gaps by severity
- Calculate traceability score using formula:
  `Traceability Score = (Implementation Coverage × 0.5) + (Test Coverage × 0.4) + (Gap Coverage × 0.1)`
  where `Gap Coverage = 100% − (Critical × 30% + High × 20% + Medium × 10%)`
- Determine traceability status (PASS ≥80%, CONCERNS 65-79%, FAIL <65%)
- Identify traceability gaps (AC not implemented, AC not tested, missing evidence)
- Calculate weighted contribution to overall gate (score × 20% weight)
If Traceability Not Available:
- If NFR available: Proceed (NFR covers quality attributes)
- If NFR also missing: Halt (minimum requirement not met)
Output:
✓ Traceability dimension synthesized
✓ Score: {score}% ({status})
✓ Implementation Coverage: {impl_coverage}%
✓ Test Coverage: {test_coverage}%
✓ Gaps: {critical}/{high}/{medium} (critical/high/medium)
✓ Contribution to Gate: {weighted_score} points (20% weight)
Reference: See gate-dimensions.md for traceability scoring formula.
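A small sketch of the Step 3 formula; applying the severity penalties once per gap (rather than once per severity level present) is an assumption, and Gap Coverage is floored at zero:

```python
# Sketch of the Step 3 traceability formula. Penalties applied per gap is
# an assumption; Gap Coverage is floored at 0.

def traceability_score(impl_coverage: float, test_coverage: float,
                       critical: int, high: int, medium: int) -> float:
    gap_coverage = max(0.0, 100.0 - (critical * 30 + high * 20 + medium * 10))
    return impl_coverage * 0.5 + test_coverage * 0.4 + gap_coverage * 0.1

# Example: 90% implemented, 80% tested, one high and two medium gaps.
print(f"{traceability_score(90, 80, critical=0, high=1, medium=2):.1f}%")  # 83.0%
```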
Step 4: Synthesize NFR Dimension
Purpose: Evaluate non-functional requirements from NFR assessment. Use overall NFR score directly, check for critical gaps in security/reliability, and apply special rules for production blockers.
Actions:
- Load NFR data (if available): overall score, category scores (security, performance, reliability, maintainability, scalability, usability), gaps by severity and category
- NFR score is direct: Overall NFR Score from assessment (no additional calculation)
- Determine NFR status (PASS ≥75%, CONCERNS 60-74%, FAIL <60%)
- Apply special rules:
  - Security <50%: FAIL (production blocker)
  - Reliability <50%: FAIL (production blocker)
  - Critical security gap: CONCERNS or FAIL
  - Critical reliability gap: CONCERNS
- Identify NFR gaps (critical gaps by category, categories below 50%, high-severity gaps)
- Calculate weighted contribution to overall gate (score × 25% weight)
If NFR Not Available:
- If Traceability available: Proceed (traceability covers functional correctness)
- If Traceability also missing: Halt (minimum requirement not met)
- Fallback: Use implementation quality as proxy
Output:
✓ NFR dimension synthesized
✓ Overall NFR Score: {score}% ({status})
✓ Security: {security_score}% ({security_status})
✓ Reliability: {reliability_score}% ({reliability_status})
✓ Critical NFR Gaps: {count} (Security: {sec}, Reliability: {rel})
✓ Contribution to Gate: {weighted_score} points (25% weight)
Reference: See gate-dimensions.md for NFR dimension scoring and special rules.
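A sketch of how the special rules might layer over the 75/60 banding; the exact precedence when rules overlap is an assumption, and it reuses `dimension_status` from the Step 2 sketch:

```python
# Sketch of the Step 4 special rules layered over the 75/60 banding.
# Rule precedence is an assumption; reuses dimension_status from the
# Step 2 sketch.

def nfr_status(overall: float, security: float, reliability: float,
               critical_security_gaps: int,
               critical_reliability_gaps: int) -> str:
    if security < 50 or reliability < 50:
        return "FAIL"  # production blocker
    status = dimension_status(overall, pass_min=75, concerns_min=60)
    if critical_security_gaps > 0 and status == "PASS":
        status = "CONCERNS"  # may escalate to FAIL per gate criteria
    if critical_reliability_gaps > 0 and status == "PASS":
        status = "CONCERNS"
    return status
```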
Step 5: Synthesize Implementation Quality Dimension
Purpose: Evaluate implementation quality from task specification and implementation record. Calculate implementation quality score based on task completion, testing, documentation, and code review status.
Actions:
- Extract implementation data from task specification: status (Draft/In Progress/Review/Complete), files changed, implementation record (completion, testing, documentation)
- Calculate implementation quality score using formula:
  `Implementation Quality Score = (Task Completion × 0.4) + (Testing × 0.3) + (Documentation × 0.2) + (Code Review × 0.1)`
- Determine implementation quality status (PASS ≥70%, CONCERNS 50-69%, FAIL <50%)
- Calculate weighted contribution to overall gate (score × 5% weight)
Output:
✓ Implementation Quality dimension synthesized
✓ Score: {score}% ({status})
✓ Task Completion: {completion}%
✓ Testing: {testing}%
✓ Documentation: {documentation}%
✓ Contribution to Gate: {weighted_score} points (5% weight)
Reference: See gate-dimensions.md for implementation quality scoring.
Step 6: Synthesize Compliance Dimension
Purpose: Evaluate compliance with standards, policies, and regulatory requirements. Check if required compliance activities (security review, privacy review, etc.) have been completed.
Actions:
- Check compliance requirements from config: standards (OWASP, WCAG, REST API best practices), policies (security review, privacy review, legal review)
- Evaluate compliance: For each requirement, check if assessment addressed it and extract compliance status
- Calculate compliance score: (Requirements Met / Total Requirements) × 100%
- Determine compliance status (PASS 100%, CONCERNS 90-99%, FAIL <90%)
- Calculate weighted contribution to overall gate (score × 5% weight)
Output:
✓ Compliance dimension synthesized
✓ Score: {score}% ({status})
✓ Standards Met: {count}/{total}
✓ Policies Met: {count}/{total}
✓ Contribution to Gate: {weighted_score} points (5% weight)
Reference: See gate-dimensions.md for compliance evaluation methodology.
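Compliance uses a simple met/total ratio with a strict band in which only full compliance passes. A minimal sketch:

```python
# Sketch of Step 6 scoring: a met/total ratio with a strict band where
# only full compliance passes.

def compliance_score(met: int, total: int) -> float:
    return 100.0 if total == 0 else met / total * 100.0

def compliance_status(score: float) -> str:
    if score >= 100:
        return "PASS"
    return "CONCERNS" if score >= 90 else "FAIL"

print(compliance_status(compliance_score(met=9, total=10)))  # CONCERNS
```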
Step 7: Calculate Overall Quality Score and Make Decision
Purpose: Synthesize all dimensions into overall quality score, apply gate criteria, evaluate waivers, and make final gate decision.
Actions:
- Calculate overall quality score: Sum of all weighted dimension scores (target: 100%)
- Apply quality gate criteria from config:
  - Overall score threshold (default: ≥75%)
  - Dimension minimums (risk ≥50%, test ≥65%, trace ≥65%, NFR ≥60%, implementation ≥50%, compliance ≥90%)
  - Critical gap rules (no critical security gaps, max 1 critical reliability gap)
- Evaluate each criterion: check whether the overall score meets the threshold, all dimensions meet their minimums, and the critical gap rules are satisfied
- Determine preliminary gate decision:
  - FAIL: security or reliability <50%, a critical security gap, or overall score <60%
  - CONCERNS: overall score <75%, any dimension below its minimum, or critical gaps present
  - PASS: all criteria met
- Check waiver requests: if any gaps are waived with justification, remove them from the critical counts and re-evaluate the decision
- Finalize decision with reasoning: document status, overall score, criteria met/not met, blockers, and action items
Output: Overall score, gate decision (PASS/CONCERNS/FAIL/WAIVED), criteria met, critical issues, waivers, action items
See: references/templates.md#step-7-overall-quality-score-and-decision and gate-decision-logic.md
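A sketch of the decision rules under the default criteria above; the critical-gap counts are assumed to already exclude waived gaps, and the precedence of FAIL over CONCERNS over PASS follows the Actions list:

```python
# Sketch of the Step 7 decision rules under the default criteria above.
# Critical-gap counts are assumed to already exclude waived gaps.

def gate_decision(overall: float, dims: dict[str, float],
                  minimums: dict[str, float],
                  security_score: float, reliability_score: float,
                  critical_security_gaps: int, critical_reliability_gaps: int,
                  any_waivers: bool) -> str:
    if (security_score < 50 or reliability_score < 50
            or critical_security_gaps > 0 or overall < 60):
        return "FAIL"
    below_min = any(dims[d] < floor for d, floor in minimums.items() if d in dims)
    if overall < 75 or below_min or critical_reliability_gaps > 1:
        return "CONCERNS"
    return "WAIVED" if any_waivers else "PASS"
```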
Step 8: Generate Quality Gate Reports and Present Summary
Purpose: Generate structured YAML (for CI/CD automation) and human-readable Markdown (for review) reports, then present concise summary to user.
Actions:
- Generate YAML report:
  - Task metadata (ID, title, type)
  - Assessment metadata (date, assessor, available assessments)
  - Scores (overall score, dimension scores with weights and status)
  - Decision (status, confidence, reasoning, blockers, action items)
  - Waivers (gap ID, justification, approver, date)
  - Audit trail (timestamp, action, result, assessor)
- Generate Markdown report using template:
  - Populate template from `.claude/templates/quality-gate.md`
  - Include all dimension details, scores, gaps, action items
  - Format for human readability with clear sections
- Present summary to user:
  - Overall quality score and gate decision
  - Dimension scores with status indicators
  - Reasoning for decision
  - Critical issues and blockers (if any)
  - Action items with priorities
  - Waivers granted (if any)
  - Next steps (complete action items, proceed with merge, etc.)
- Emit telemetry event with all metrics
Output: Assessment complete summary with overall score, dimension scores, gate decision, reasoning, reports generated (YAML/Markdown), next steps
See: references/templates.md#step-8-report-formats for complete YAML and Markdown report structures
See: gate-examples.md for example reports and summaries
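A sketch of emitting the YAML report with PyYAML; the field names follow the Step 8 list above, the task ID and values are hypothetical, and the authoritative schema lives in gate-integration.md:

```python
# Sketch of emitting the Step 8 YAML report with PyYAML. Task ID and
# values are hypothetical; see gate-integration.md for the real schema.
from datetime import date, datetime, timezone
import yaml

report = {
    "task": {"id": "TASK-042", "title": "Example task", "type": "feature"},
    "assessment": {"date": str(date.today()), "assessor": "quality-gate",
                   "assessments_available": ["traceability", "nfr"]},
    "scores": {"overall": 78.8,
               "dimensions": {"nfr": {"score": 80, "weight": 0.25, "status": "PASS"}}},
    "decision": {"status": "PASS", "confidence": "high",
                 "reasoning": "Overall score and all dimension minimums met",
                 "blockers": [], "action_items": []},
    "waivers": [],
    "audit_trail": [{"timestamp": datetime.now(timezone.utc).isoformat(),
                     "action": "gate_evaluated", "result": "PASS",
                     "assessor": "quality-gate"}],
}

# Path pattern: {task-id}-gate-{date}.yaml
with open(f"TASK-042-gate-{date.today()}.yaml", "w") as f:
    yaml.safe_dump(report, f, sort_keys=False)
```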
Integration with Other Skills
Integration with Individual Assessments
Inputs from assessments:
- risk-profile provides risk scores, mitigation status, high-risk test coverage → feeds Risk Management dimension
- test-design provides test counts by priority, implementation/pass status, coverage → feeds Test Coverage dimension
- trace-requirements provides AC implementation/test coverage, gaps → feeds Traceability dimension
- nfr-assess provides overall NFR score, category scores, critical gaps → feeds NFR dimension
Sequential workflow:
1. Run risk-profile (optional)
2. Run test-design (optional)
3. Run trace-requirements (required*)
4. Run nfr-assess (required*)
5. Run quality-gate (synthesizes all)
*At least one of trace-requirements or nfr-assess is required
Integration with CI/CD
YAML output enables automation:
- CI/CD pipeline reads `.claude/quality/gates/{task-id}-gate-{date}.yaml`
- Checks the `gate.decision.status` field:
  - FAIL → Block PR merge, fail build
  - CONCERNS → Post warning, require review
  - PASS → Allow merge, proceed
  - WAIVED → Log waiver, allow merge with tracking
See: references/templates.md#integration-examples and gate-integration.md for CI/CD integration patterns
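A sketch of a CI step consuming the report; the file path is hypothetical, and `decision["status"]` corresponds to the `gate.decision.status` field referenced above:

```python
# Sketch of a CI gate check consuming the YAML report. The path is
# hypothetical; nonzero exit blocks the pipeline on FAIL.
import sys
import yaml

with open(".claude/quality/gates/TASK-042-gate-2025-01-01.yaml") as f:
    report = yaml.safe_load(f)

status = report["decision"]["status"]
if status == "FAIL":
    print("Quality gate FAILED - blocking merge")
    sys.exit(1)
if status == "CONCERNS":
    print("Quality gate: CONCERNS - manual review required")
elif status == "WAIVED":
    print("Quality gate: WAIVED -", len(report.get("waivers", [])), "waiver(s) logged")
else:
    print("Quality gate: PASS")
sys.exit(0)
```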
Best Practices
- Run all 4 assessments for comprehensive evaluation (minimum: 1 of traceability or NFR)
- Set clear thresholds upfront in config before running gate
- Use waivers judiciously with clear justification and approval
- Automate in CI/CD using YAML output for consistent standards
- Track action items from CONCERNS decisions to ensure follow-up
- Document decisions clearly with detailed reasoning (especially FAIL/WAIVED)
- Re-run after fixes to validate improvements and track score history
- Configure weights appropriately based on project type (e.g., increase Security weight for critical projects)
References
- templates.md - All output formats, config examples, dimension synthesis outputs, decision templates, report structures
- gate-dimensions.md - Scoring methodology for 6 dimensions with formulas and examples
- gate-decision-logic.md - Decision matrix, criteria evaluation, critical gap rules, waiver logic
- gate-integration.md - CI/CD integration, waiver process, audit trail, YAML schema
- gate-examples.md - Example reports (PASS/CONCERNS/FAIL/WAIVED), outputs, summaries
Quality Gate Decision skill - Version 2.0 - Minimal V2 Architecture