---
name: admin-dashboard-qa
description: Use this skill when implementing, modifying, or fixing the admin dashboard (admin-dashboard-v2). Triggers for tasks involving dashboard UI, components, pages, features, hooks, or API integration. Orchestrates a rigorous QA workflow with PM review, use case writing, testing, and bug fixing cycles.
---
# Admin Dashboard QA Workflow
This skill implements a rigorous multi-agent testing workflow for admin dashboard frontends. It automatically adapts to whichever dashboard version you're working on.
## Auto-Detection
When this skill activates, first determine which dashboard you're working on:
- Check the task context - Does it mention a specific version?
- Check file paths - Which package directory is involved?
- Default to the active version - Currently `admin-dashboard-v2`
Dashboard Packages:

- `packages/admin-dashboard-v2` - Current production version
- Any future `packages/admin-dashboard-*` directories (a detection sketch follows)
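For example, a small shell sketch can resolve the newest package directory and fall back to the current default. This assumes the `packages/` layout above and that version suffixes sort naturally with `sort -V`:

```bash
# Sketch: pick the newest admin-dashboard-* package, else the active default
DASHBOARD_DIR=$(ls -d packages/admin-dashboard-* 2>/dev/null | sort -V | tail -n 1)
DASHBOARD_DIR=${DASHBOARD_DIR:-packages/admin-dashboard-v2}  # fall back to the active version
echo "Using dashboard package: $DASHBOARD_DIR"
```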
## Adaptive Configuration
Before starting, detect the dashboard's test setup:
```bash
# Find the correct package
DASHBOARD_DIR="packages/admin-dashboard-v2"  # or detected version

# Check for test config
ls $DASHBOARD_DIR/vitest.config.ts  # Vitest
ls $DASHBOARD_DIR/jest.config.js    # Jest
ls $DASHBOARD_DIR/package.json      # Check test scripts
```
Then adapt commands accordingly:
- Vitest: `pnpm test`, `pnpm test:coverage`
- Jest: `pnpm test`, `pnpm test -- --coverage`
- Other: check `package.json` scripts (see the sketch below)
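A minimal sketch of that branching, assuming the `$DASHBOARD_DIR` detected above and that the pnpm scripts listed here actually exist in the package:

```bash
# Sketch: pick the coverage command based on which test config is present
# (script names are the ones listed above; verify them in package.json)
if [ -f "$DASHBOARD_DIR/vitest.config.ts" ]; then
  COVERAGE_CMD="pnpm test:coverage"       # Vitest
elif [ -f "$DASHBOARD_DIR/jest.config.js" ]; then
  COVERAGE_CMD="pnpm test -- --coverage"  # Jest
else
  COVERAGE_CMD="pnpm test"                # unknown setup: fall back, then check scripts
fi
(cd "$DASHBOARD_DIR" && $COVERAGE_CMD)
```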
## Workflow Overview
```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│ PROJECT MANAGER │────▶│ USE CASE WRITER  │────▶│     TESTER      │
│ (Reviews scope) │     │ (Defines tests)  │     │ (Executes tests)│
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                                          │
                        ┌──────────────────┐              │
                        │    DEVELOPER     │◀─────────────┘
                        │   (Fixes bugs)   │
                        └──────────────────┘
                                │
                                ▼
                        ┌──────────────────┐
                        │ FULL REGRESSION  │
                        │       TEST       │
                        └──────────────────┘
```
## Phase 1: Project Manager Review
Before any implementation begins:
- Review the feature request/task description
- Define acceptance criteria
- Identify affected components
- Estimate risk level (low/medium/high)
- Create implementation checklist
Output a PM Summary like:
```markdown
## PM Summary: [Feature Name]

**Risk Level:** [low/medium/high]
**Affected Components:** [list]

**Acceptance Criteria:**
- [ ] Criterion 1
- [ ] Criterion 2

**Implementation Checklist:**
- [ ] Step 1
- [ ] Step 2
```
## Phase 2: Use Case Writer
Before testing, define comprehensive use cases by consulting TESTING-MANUAL.md:
- Identify all user flows affected
- Write test cases for happy paths
- Write test cases for edge cases
- Write test cases for error states
- Define expected outcomes
Format each use case as:
```markdown
### UC-[ID]: [Use Case Name]

**Preconditions:** [Setup required]
**Steps:**
1. Step 1
2. Step 2
**Expected Result:** [What should happen]
**Test Type:** [unit/integration/e2e/manual]
```
## Phase 3: Developer Implementation
Implement the feature following these steps:
- Write unit tests FIRST (TDD when possible; a loop is sketched after this list)
- Implement the feature
- Run `pnpm test` to verify unit tests pass
- Self-review code for common issues
- Hand off to Tester
## Phase 4: Tester Execution
Execute all defined use cases:
- Run automated tests: `cd packages/admin-dashboard-v2 && pnpm test`
- Check coverage: `pnpm test:coverage`
- Execute manual test cases if needed
- Document any failures with:
  - Steps to reproduce
  - Expected vs actual behavior
  - Screenshot/error message if applicable
- Report bugs to Developer (a log-capture sketch follows this list)
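To keep evidence for the report, the runs can be captured to log files; a sketch with hypothetical log names:

```bash
cd planted-availability-db/packages/$DASHBOARD

# Capture full-suite output and the coverage summary for the report
pnpm test 2>&1 | tee test-run.log
pnpm test:coverage 2>&1 | tee coverage-run.log

# Pull failure lines back out when filling in "Bugs Found"
grep -iE "fail|error" test-run.log
```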
Test Report Format:
```markdown
## Test Report: [Feature Name]

**Date:** [YYYY-MM-DD]
**Tester:** Claude AI

### Automated Tests
- Total: X
- Passed: X
- Failed: X
- Coverage: X%

### Use Case Results
| UC ID | Description | Status | Notes |
|-------|-------------|--------|-------|
| UC-01 | ... | PASS/FAIL | ... |

### Bugs Found
1. BUG-001: [Description]
   - Severity: [critical/high/medium/low]
   - Steps to reproduce: ...
```
## Phase 5: Bug Fix Cycle
For each bug found:
- Developer analyzes root cause
- Developer implements fix
- Developer writes regression test
- Tester re-verifies the specific use case (a targeted re-run is sketched after this list)
- Repeat until all bugs fixed
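Re-verification can stay targeted before the full regression pass; a sketch with a hypothetical regression-test file name:

```bash
cd planted-availability-db/packages/$DASHBOARD

# Re-run only the regression test written for the fixed bug (file name hypothetical)
pnpm test src/features/orders/bug-001-order-filter.regression.test.tsx
```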
## Phase 6: Full Regression Test
After all bugs are fixed:
- Run complete test suite: `pnpm test`
- Verify coverage thresholds are met (95% lines, 90% branches)
- Re-execute ALL use cases from the Testing Manual for affected features
- Generate final test report
**IMPORTANT:** Only mark the feature as complete when all of the following hold (a shell gate for these checks is sketched after the list):
- All unit tests pass
- Coverage thresholds met
- All defined use cases pass
- No open bugs remain
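These criteria can be chained into a single shell gate; a sketch, assuming the coverage thresholds are enforced in the test config so `pnpm test:coverage` exits non-zero when they are missed:

```bash
cd planted-availability-db/packages/$DASHBOARD

# Every check must pass before the feature is marked complete
pnpm test \
  && pnpm test:coverage \
  && pnpm lint \
  && pnpm build \
  && echo "All gates passed - feature may be marked complete" \
  || echo "A gate failed - return to the bug fix cycle"
```

The `&&` chain stops at the first failing check, so the success message only prints when every gate passed.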
## Commands
Replace `$DASHBOARD` with the detected dashboard directory (e.g., `admin-dashboard-v2`):
```bash
# Run all tests
cd planted-availability-db/packages/$DASHBOARD && pnpm test

# Run tests with coverage
cd planted-availability-db/packages/$DASHBOARD && pnpm test:coverage

# Run specific test file
cd planted-availability-db/packages/$DASHBOARD && pnpm test [path/to/test.tsx]

# Run tests in watch mode
cd planted-availability-db/packages/$DASHBOARD && pnpm test:watch

# Build check
cd planted-availability-db/packages/$DASHBOARD && pnpm build

# Lint check
cd planted-availability-db/packages/$DASHBOARD && pnpm lint
```
## Feature Discovery
When working on a NEW feature not in the Testing Manual:
1. **Explore the feature area:**

   ```bash
   # Find related components
   ls planted-availability-db/packages/$DASHBOARD/src/features/
   ls planted-availability-db/packages/$DASHBOARD/src/pages/
   ```

2. **Find existing tests as patterns:**

   ```bash
   # List test files for similar features
   find planted-availability-db/packages/$DASHBOARD -name "*.test.tsx" -o -name "*.test.ts"
   ```

3. **Define NEW use cases** following the format in TESTING-MANUAL.md

4. **Add new use cases to TESTING-MANUAL.md** for future regression testing (an append sketch follows)
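Step 4 can be as simple as appending an entry in the manual's own format; a sketch, assuming `TESTING-MANUAL.md` sits at the repo root, with a hypothetical use case and ID:

```bash
# Sketch: append a newly defined use case (UC ID and content are hypothetical)
cat >> planted-availability-db/TESTING-MANUAL.md <<'EOF'

### UC-42: Filter orders by status
**Preconditions:** Seeded order data is available
**Steps:**
1. Open the Orders page
2. Select the "pending" status filter
**Expected Result:** Only pending orders are listed
**Test Type:** manual
EOF
```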
## Reference Documents
- `TESTING-MANUAL.md` - Test case library (expand as needed)
- `TEST-REPORT-TEMPLATE.md` - Template for test reports
- Dashboard's `vitest.config.ts` or `jest.config.js` - Test configuration
- Dashboard's `package.json` - Available scripts