---
name: validate-refactoring
description: Comprehensive validation that all refactor-feature steps were completed correctly and that the refactored code follows all repo best practices. Use after completing a refactoring to ensure quality, completeness, and adherence to standards. Generates a detailed validation report.
---
Validate Refactoring Skill
A systematic validation skill that audits refactored features to ensure:
- All refactor-feature workflow steps were completed
- Code follows all repository best practices
- Architecture patterns are consistent
- Documentation is complete
- Production validation passed
When to Use This Skill
Use this skill when:
- Refactoring is marked "complete" and needs final validation
- User requests "validate the refactoring" or "audit the changes"
- Before marking a feature as production-ready
- As final quality gate before closing refactoring tasks
- When investigating if best practices were followed
IMPORTANT: This is a READ-ONLY validation skill. It identifies issues but does not fix them. Use refactor-feature or other skills to fix identified issues.
Agent Swarm Architecture
CRITICAL: This skill launches parallel agents (each working through sequential steps in its own domain) to validate different aspects simultaneously, then consolidates their findings into a comprehensive report.
Agent Distribution
Launch 5 parallel agents, each handling specific validation domains:
- Agent 1: Source & Architecture Validation - Validates source comparison, models/endpoints, parameters, migrations
- Agent 2: Type System & API Validation - Validates TypeScript patterns, placeholder content (CRITICAL), API routes, service layer
- Agent 3: Frontend & UI/UX Validation - Validates React components, UI/UX match, feature parity
- Agent 4: Security & Error Handling Validation - Validates security patterns, error handling, RLS
- Agent 5: Production & Performance Validation - Validates production logs, performance, Axiom logging
Agent Coordination
// Launch all agents in parallel
const agents = await Promise.all([
launchAgent1_SourceArchitectureValidation(),
launchAgent2_TypeSystemAPIValidation(),
launchAgent3_FrontendUIValidation(),
launchAgent4_SecurityErrorValidation(),
launchAgent5_ProductionPerformanceValidation(),
]);
// Consolidate findings
const consolidatedReport = consolidateAgentFindings(agents);
// Generate final report
generateValidationReport(consolidatedReport);
Sequential Steps Within Each Agent
Each agent works through sequential steps within its domain, as shown in the sketch after this list:
- Read relevant files
- Run validation checks
- Document findings
- Assign severity (CRITICAL/WARNING/RECOMMENDATION)
- Return findings to main process
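To make the hand-off concrete, here is a minimal sketch of what each agent might return and how the main process could consolidate results. The `Finding` shape, the severity values, and this `consolidateAgentFindings` body are illustrative assumptions, not an existing API in this repo:

```typescript
// Hypothetical shape of a single validation finding returned by an agent
type Severity = 'CRITICAL' | 'WARNING' | 'RECOMMENDATION';

interface Finding {
  agent: string;     // e.g. 'source-architecture'
  check: string;     // e.g. 'Validation 1.9: Model & Endpoint Verification'
  severity: Severity;
  message: string;
  location?: string; // file:line, when applicable
}

// Consolidate all agents' findings, surfacing CRITICAL issues first
function consolidateAgentFindings(agentResults: Finding[][]): Finding[] {
  const rank: Record<Severity, number> = { CRITICAL: 0, WARNING: 1, RECOMMENDATION: 2 };
  return agentResults.flat().sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```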
Validation Workflow
Phase 0: Setup Validation Context
Step 0.1: Identify Feature Being Validated
Gather context about the refactoring:
# Get recent commits
git log --oneline --since="7 days ago" | head -20
# Check what files were changed
git diff HEAD~10..HEAD --name-only | sort -u
# Read recent commit messages for feature name
git log --pretty=format:"%s" --since="7 days ago" | grep -i "feat\|refactor"
Ask user if not clear:
- What feature was refactored?
- Where is the reference/original implementation? (CRITICAL for source comparison)
- What are the main files/routes involved?
- What was the scope (frontend, backend, full-stack)?
Step 0.2: Read Validation Standards
Read all relevant best practice documentation:
# Core standards
cat docs/CODING_BEST_PRACTICES.md
cat docs/ARCHITECTURE_OVERVIEW.md
cat docs/STYLE_GUIDE.md
cat docs/TESTING_GUIDE.md
cat .env.example
# API standards
cat docs/api/API_GUIDE.md
cat docs/api/API_REFERENCE.md
# Security
cat docs/security/SECURITY_GUIDE.md
Step 0.3: Read Original Source (CRITICAL)
If refactoring from a reference implementation, read the original source for FUNCTIONALITY:
IMPORTANT DISTINCTION - What to Compare:
When validating against a reference implementation, we check TWO types of fidelity:
FUNCTIONAL Fidelity (MUST MATCH 100%):
- Features and capabilities
- Models used (e.g., fal-ai/flux-dev)
- Model parameters (image_size, guidance_scale, etc.)
- UI/UX and layout
- User flows and interactions
- Business logic and validation
IMPLEMENTATION Patterns (MUST USE REPO PATTERNS):
- ✅ EXPECT DIFFERENT: Queue architecture (repo uses Supabase queues, not direct API calls)
- ✅ EXPECT DIFFERENT: Logging (repo uses Axiom, not console.log)
- ✅ EXPECT DIFFERENT: Auth (repo uses withAuth middleware)
- ✅ EXPECT DIFFERENT: Rate limiting (repo uses DB-backed)
- ✅ EXPECT DIFFERENT: Deployment (repo uses Vercel)
- ✅ EXPECT DIFFERENT: Testing (repo's testing patterns)
- ✅ EXPECT DIFFERENT: Database schema structure (repo's migration patterns)
# Read original implementation files
cat [reference-path]/*.ts
cat [reference-path]/*.tsx
cat [reference-path]/route.ts
cat [reference-path]/README.md
# Identify FUNCTIONAL patterns to verify (these should match)
grep -r "fal-ai/\|gpt-image-\|replicate\|openai" [reference-path] # Models (MUST MATCH)
grep -r "image_size\|guidance_scale\|num_inference_steps" [reference-path] # Parameters (MUST MATCH)
# Identify IMPLEMENTATION patterns (these may differ - check repo patterns used)
grep -r "axiomLogger\|log\.info\|log\.error" [reference-path] # Logging (check POINTS, not method)
grep -r "queue\|direct.*call" [reference-path] # Queue architecture (repo uses queues)
grep -r "supabase.*migration\|CREATE TABLE\|ALTER TYPE" [reference-path] # Schema (check tables/columns exist)
Create source comparison matrix:
| Aspect | Original Source | Current Implementation | Expected Match? |
|---|---|---|---|
| Models (FUNCTIONAL) | [List] | [List] | ✅ MUST MATCH |
| Parameters (FUNCTIONAL) | [List] | [List] | ✅ MUST MATCH |
| Features (FUNCTIONAL) | [List] | [List] | ✅ MUST MATCH |
| UI/UX (FUNCTIONAL) | [Layout] | [Layout] | ✅ MUST MATCH |
| Queue Architecture (IMPL) | [Pattern] | [Supabase queues] | ⚠️ DIFFERENT OK |
| Logging Method (IMPL) | [Pattern] | [Axiom] | ⚠️ DIFFERENT OK |
| Auth Pattern (IMPL) | [Pattern] | [withAuth] | ⚠️ DIFFERENT OK |
| Logging Points (FUNCTIONAL) | [When logged] | [Same points] | ✅ MUST MATCH |
| Database Tables/Columns (SCHEMA) | [List] | [List] | ✅ MUST MATCH |
| Migration Structure (IMPL) | [Pattern] | [Repo's migration patterns] | ⚠️ DIFFERENT OK |
Phase 1: Refactor Workflow Validation
Validate each phase of the refactor-feature skill was completed correctly.
Validation 1.1: Environment Variable Verification
Check: Were environment variables verified?
# Check for env var usage in new code
grep -r "process\.env\." [feature-files]
# Verify each variable exists in .env.example
while IFS= read -r var; do
if grep -q "^${var}=" .env.example; then
echo "✅ $var documented in .env.example"
else
echo "❌ $var MISSING from .env.example"
fi
done < <(grep -rho "process\.env\.[A-Z_]*" [feature-files] | sed 's/process\.env\.//' | sort -u)
# Check for wrong naming patterns
grep -r "process\.env\.FAL_KEY" [feature-files] # Should be FAL_API_KEY
grep -r "process\.env\.GOOGLE_CREDENTIALS" [feature-files] # Should be GOOGLE_SERVICE_ACCOUNT
grep -r "process\.env\.SUPABASE_URL" [feature-files] # Should be NEXT_PUBLIC_SUPABASE_URL
Checklist:
- All `process.env.*` variables documented in `.env.example`
- No incorrect variable names (`FAL_KEY`, `GOOGLE_CREDENTIALS`, etc.)
- Client-side variables use `NEXT_PUBLIC_` prefix
- API keys use _API_KEY suffix (not _KEY)
- New variables have documentation comments in .env.example
- User was notified of missing variables (check commit messages/PRs)
Report Issues:
- ❌ Environment variable `X` used but not in `.env.example`
- ❌ Using deprecated variable name `FAL_KEY` instead of `FAL_API_KEY`
- ❌ Client-side variable `SUPABASE_URL` missing `NEXT_PUBLIC_` prefix
Validation 1.2: Duplicate Route Check
Check: Were existing routes verified to avoid duplication?
# Find all routes in feature
find [feature-path] -name "route.ts" -o -name "route.js"
# Check for similar routes in app/api
find app/api -name "route.ts" | xargs grep -l "export async function POST\|export async function GET"
# Search for similar functionality
grep -r "processing_jobs.*insert" app/api/
grep -r "parallel.*generation" app/api/
Checklist:
- No duplicate routes with same path
- No duplicate functionality in different routes
- Existing routes were checked before creating new ones
- Deprecated routes documented (if replacing old implementation)
- Route paths follow repo conventions
Report Issues:
- ❌ Duplicate route: `/api/feature/create` exists in two locations
- ❌ Functionality duplicated: image generation in both `/api/image/generate` and `/api/generate/image`
Validation 1.3: Queue Architecture Verification
Check: Does implementation use THIS repo's Supabase queue pattern (even if original used direct calls)?
IMPORTANT: Original may use direct API calls - that's OK! We validate that OUR implementation uses queues:
- ✅ Original uses direct API calls → Implementation should use Supabase queues (repo pattern)
- ✅ Original uses different queue system → Implementation should use Supabase queues (repo pattern)
- ❌ Implementation uses direct API calls → FAIL (must use repo's queue pattern)
# Check for direct AI API calls in routes (❌ BAD in OUR implementation)
grep -r "fal\.run\|fal\.queue\|replicate\.run\|openai\.createImage" app/api/[feature]/
# Check for job creation (✅ GOOD - our repo pattern)
grep -r "processing_jobs.*insert\|createJob" app/api/[feature]/
# Verify job types in database enum
supabase db remote sql "
SELECT enumlabel FROM pg_enum
JOIN pg_type ON pg_enum.enumtypid = pg_type.oid
WHERE pg_type.typname = 'job_type'
"
# Check worker handles job type
grep -r "case JobType\.[FEATURE_JOB_TYPE]" lib/workers/
Checklist:
- No direct AI API calls in OUR route handlers (repo uses queues)
- Jobs created and inserted into `processing_jobs` table
- Job type exists in database enum
- Worker implemented to process job type
- Frontend polls job status (not waiting for sync response)
- Job status updates correctly (pending → processing → completed)
Report Issues:
- ❌ CRITICAL: Direct AI API call in route handler (must use Supabase queue pattern): `app/api/feature/route.ts:45`
- ❌ CRITICAL: Job type `parallel-image-generation` not in worker switch statement
- ❌ Frontend waits for sync response instead of polling job status
NOT Issues (if original had these):
- ✅ OK: Original used direct API calls (we correctly converted to queues)
- ✅ OK: Original used different queue system (we correctly use Supabase queues)
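To make the repo's expected pattern concrete, here is a minimal route-level sketch that enqueues a job instead of calling the AI API directly. The `processing_jobs` table and `job_type` value come from this skill's checks; the column names and the `SUPABASE_SERVICE_ROLE_KEY` variable are assumptions for the sketch:

```typescript
import { createClient } from '@supabase/supabase-js';

// Assumed env vars; this repo's actual client setup may differ
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function enqueueGeneration(userId: string, prompt: string) {
  // ❌ BAD (direct call): await fal.run('fal-ai/flux-dev', { input: { prompt } });

  // ✅ GOOD (repo pattern): insert a job and let the worker process it
  const { data: job, error } = await supabase
    .from('processing_jobs')
    .insert({
      user_id: userId,
      job_type: 'parallel-image-generation', // must exist in the job_type enum
      status: 'pending',
      metadata: { prompt },
    })
    .select()
    .single();

  if (error) throw error;
  return job; // the frontend polls this job's status instead of awaiting a sync response
}
```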
Validation 1.4: Type System Validation
Check: Do TypeScript types match database schema?
# Read latest migrations
ls -t supabase/migrations/*.sql | head -5 | xargs cat
# Check types/supabase.ts matches migrations
grep -A 20 "Database\[" types/supabase.ts
# Check for type mismatches
grep -r "increment_rate_limit" types/supabase.ts
grep -r "increment_rate_limit" lib/rateLimit.ts
Checklist:
- Function signatures in `types/supabase.ts` match migrations
- Parameter names match exactly (e.g., `p_rate_key`, not `rate_key`)
- Return types match (single value vs array)
- Enum values in TypeScript match database enums
- Branded types used correctly for IDs
- No `any` types used
- All function return types specified
Report Issues:
- ❌ Type mismatch: `increment_rate_limit` expects `rate_key` but migration has `p_rate_key`
- ❌ Return type mismatch: function returns single value but types expect array
- ❌ Missing branded type: using `string` instead of `UserId` for user IDs
Validation 1.5: API Documentation Verification
Check: Were APIs verified against latest documentation?
# Check for API service files
find lib/services -name "*Service.ts"
# Look for API calls
grep -r "fal\.run\|replicate\.run\|openai\." lib/services/
# Check for model IDs
grep -r "fal-ai/\|gpt-image-\|ideogram-v\|imagen-" lib/
Use Firecrawl to verify current API docs:
// Check FAL AI
await mcp__firecrawl__firecrawl_search({
query: 'fal ai flux dev api parameters 2025',
limit: 3,
});
// Check model names
await mcp__firecrawl__firecrawl_scrape({
url: 'https://fal.ai/models',
formats: ['markdown'],
});
Checklist:
- All model IDs match latest documentation (2025)
- No deprecated model names (dalle-3, ideogram-v2, etc.)
- Parameter names use correct format (snake_case vs camelCase)
- Authentication headers use correct format
- API endpoints are current (not deprecated)
- Response parsing handles documented formats
- Error handling covers documented error codes
Report Issues:
- ❌ Using deprecated model: `dalle-3` instead of `gpt-image-1`
- ❌ Wrong parameter name: `imageSize` should be `image_size`
- ❌ Deprecated endpoint: using old FAL AI URL format
Validation 1.6: Feature Parity Validation
Check: Does implementation match reference 100%?
If reference implementation provided, verify:
# Compare file structures
diff -r [reference-path] [implementation-path] --brief
# Check for missing features
# Read reference implementation
# Read current implementation
# Create feature checklist
Checklist:
- Feature parity report exists in docs/reports/
- Parity score is 100% (X/X features)
- All user flows from reference work
- No missing functionality
- Edge cases handled identically
- Error states match reference
- Loading states match reference
- Success states match reference
Report Issues:
- ❌ Feature parity: 8/10 features (missing: prompt optimization, creative variations)
- ❌ Missing feature: Image upload drag-and-drop not implemented
- ❌ Different behavior: Error handling shows generic message instead of specific error
Validation 1.7: UI/UX Match Validation
Check: Does UI match reference or document improvements?
# Find component files
find [feature-path] -name "*.tsx" -o -name "*.jsx"
# Check for styling
grep -r "className\|tw-\|css" [feature-files]
# Look for UI improvement documentation
ls docs/reports/*UI*
ls docs/reports/*UX*
Checklist:
- Layout matches reference OR improvements documented
- Component placement identical to reference
- Colors, fonts, spacing match
- Responsive behavior works at all breakpoints
- Animations/transitions work identically
- Modal/dialog behavior matches
- Keyboard navigation identical
- If UI changed: improvements documented and approved
- Screenshots comparing reference vs implementation
Report Issues:
- ❌ Layout mismatch: Grid columns differ from reference (4 vs 3)
- ❌ Spacing incorrect: Button margin should be 16px not 8px
- ❌ UI change undocumented: Added new filter panel without documenting why
Validation 1.8: Production Testing Validation
Check: Was production validated with Chrome DevTools and Axiom?
# Check Axiom for recent errors
mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(24h)
| where ['severity'] == "error"
| where ['message'] contains "[feature-name]" or ['url'] contains "[feature-path]"
| summarize count() by ['message']
| order by ['count_'] desc
`
})
# Check for rate limit fallbacks
mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(24h)
| where ['message'] contains "rate limit" and ['message'] contains "fallback"
| count
`
})
Checklist:
- Feature tested in production (not just local)
- Chrome DevTools MCP used to test user flows
- No console errors in browser
- All API endpoints return 2xx
- Axiom shows 0 errors in last 24h for feature
- No rate limit fallbacks to memory
- Jobs created and processed successfully
- At least 3 iterations of testing performed
- All errors found were fixed recursively
Report Issues:
- ❌ Production not tested: No evidence of Chrome DevTools testing
- ❌ Axiom shows errors: 15 errors in last 24h related to feature
- ❌ Rate limit fallback: Using in-memory rate limiting instead of database
Validation 1.9: Model & Endpoint Verification (CRITICAL)
Check: Are the EXACT same models and endpoints used as the original source?
MUST READ original source first (see Step 0.3):
# Extract models from original source
grep -ro "fal-ai/[a-z0-9-]*\|gpt-image-[0-9]\|ideogram-v[0-9]\|imagen-[0-9]\|replicate/[a-z0-9-]*" [reference-path] | sort -u > /tmp/original_models.txt
# Extract models from current implementation
grep -ro "fal-ai/[a-z0-9-]*\|gpt-image-[0-9]\|ideogram-v[0-9]\|imagen-[0-9]\|replicate/[a-z0-9-]*" [implementation-path] | sort -u > /tmp/current_models.txt
# Compare
diff /tmp/original_models.txt /tmp/current_models.txt
# Extract API endpoints from original
grep -ro "https://[a-z0-9.-]*/api/[a-z0-9/-]*\|fal\.run\|replicate\.run\|openai\." [reference-path] | sort -u > /tmp/original_endpoints.txt
# Extract API endpoints from current
grep -ro "https://[a-z0-9.-]*/api/[a-z0-9/-]*\|fal\.run\|replicate\.run\|openai\." [implementation-path] | sort -u > /tmp/current_endpoints.txt
# Compare
diff /tmp/original_endpoints.txt /tmp/current_endpoints.txt
Model Comparison Checklist:
- Every model from original source is present in implementation
- No models changed (e.g., `fal-ai/flux-dev` vs `fal-ai/flux-pro`)
- Model IDs match exactly (case-sensitive)
- No deprecated models used
- Model versions match (e.g., `ideogram-v2` vs `ideogram-v2.1`)
Endpoint Comparison Checklist:
- Every endpoint from original is present
- API URLs match exactly
- HTTP methods match (GET, POST, etc.)
- Authentication patterns match
- Base URLs match (fal.run vs fal.ai)
Create detailed comparison table:
| Model/Endpoint | Original | Current | Status | Issue |
|---|---|---|---|---|
| Image model | `fal-ai/flux-dev` | `fal-ai/flux-pro` | ❌ | Model changed without documentation |
| FAL endpoint | `fal.run()` | `fal.queue.submit()` | ⚠️ | Verify intentional (queue vs sync) |
| OpenAI model | `gpt-image-1` | `gpt-image-1` | ✅ | Exact match |
Report Issues:
- ❌ CRITICAL: Model mismatch - original uses `fal-ai/flux-dev`, implementation uses `fal-ai/flux-pro`
- ❌ CRITICAL: Missing model - original uses `replicate/stable-diffusion-xl`, not found in implementation
- ❌ CRITICAL: Endpoint changed - original uses `fal.run()` (sync), implementation uses `fal.queue.submit()` (async) without documentation
- ⚠️ Different API base URL - original uses `https://api.fal.ai/v1`, implementation uses `https://fal.run`
Validation 1.10: Parameter Matching Verification (CRITICAL)
Check: Do ALL parameters match the original source exactly?
Extract parameters from original source:
# Read original API calls
grep -A 20 "fal\.run\|replicate\.run\|openai\." [reference-path] > /tmp/original_api_calls.txt
# Extract parameter names
grep -Eo "(image_size|guidance_scale|num_inference_steps|prompt|negative_prompt|seed|width|height|steps|cfg_scale|sampler|model)" /tmp/original_api_calls.txt | sort -u > /tmp/original_params.txt
# Read current API calls
grep -A 20 "fal\.run\|replicate\.run\|openai\." [implementation-path] > /tmp/current_api_calls.txt
# Extract parameter names
grep -Eo "(image_size|guidance_scale|num_inference_steps|prompt|negative_prompt|seed|width|height|steps|cfg_scale|sampler|model)" /tmp/current_api_calls.txt | sort -u > /tmp/current_params.txt
# Compare
diff /tmp/original_params.txt /tmp/current_params.txt
Read both implementations side-by-side:
// Original source
const originalParams = {
image_size: '1024x1024',
guidance_scale: 7.5,
num_inference_steps: 50,
prompt: userPrompt,
seed: 42,
};
// Current implementation
const currentParams = {
imageSize: '1024x1024', // ❌ WRONG: snake_case changed to camelCase
guidanceScale: 7.5, // ❌ WRONG: parameter name format changed
steps: 50, // ❌ WRONG: 'num_inference_steps' renamed to 'steps'
prompt: userPrompt, // ✅ CORRECT
// ❌ MISSING: 'seed' parameter not included
};
Parameter Comparison Checklist:
- Every parameter from original is present
- Parameter names match EXACTLY (snake_case vs camelCase)
- Parameter values match (types, defaults, ranges)
- No missing parameters
- No extra parameters (unless documented as enhancement)
- Optional parameters handled identically
- Parameter validation matches original
Create detailed parameter comparison table:
| Parameter | Original | Current | Status | Issue |
|---|---|---|---|---|
| Prompt field | `prompt` | `prompt` | ✅ | Match |
| Image size | `image_size` | `imageSize` | ❌ | Format changed (snake to camel) |
| Guidance | `guidance_scale: 7.5` | `guidanceScale: 7.5` | ❌ | Format changed |
| Steps | `num_inference_steps: 50` | `steps: 50` | ❌ | Parameter renamed |
| Seed | `seed: 42` | (missing) | ❌ | Parameter missing |
| Width | `width: 1024` | `width: 1024` | ✅ | Match |
Report Issues:
- ❌ CRITICAL: Parameter name mismatch - `image_size` changed to `imageSize` (API expects snake_case)
- ❌ CRITICAL: Parameter renamed - `num_inference_steps` changed to `steps` (API may not recognize it)
- ❌ CRITICAL: Missing parameter - `seed` parameter from original not included in implementation
- ❌ Parameter value mismatch - default `guidance_scale` is 7.5 in original, 10 in implementation
- ⚠️ Extra parameter - implementation adds `safety_checker: false`, not in original (verify intended)
Validation 1.11: Axiom Logging Verification (CRITICAL)
Check: Are logging POINTS present matching the original source (functional), using THIS repo's Axiom pattern (implementation)?
IMPORTANT: We validate FUNCTIONAL logging (when/what is logged), NOT implementation method:
- ✅ SAME: Logging at same points (start, complete, error, key milestones)
- ✅ SAME: Context fields included (userId, projectId, model, etc.)
- ✅ DIFFERENT (expected): Logging method (repo uses `axiomLogger`; original may use `console.log` or other)
- ✅ DIFFERENT (expected): Log structure (repo's Axiom format vs original's format)
Read original logging pattern:
# Extract logging from original
grep -rn "axiomLogger\|log\.\(info\|error\|warn\|debug\)" [reference-path] > /tmp/original_logging.txt
# Analyze pattern
cat /tmp/original_logging.txt
Read current logging pattern:
# Extract logging from current implementation
grep -rn "axiomLogger\|log\.\(info\|error\|warn\|debug\)" [implementation-path] > /tmp/current_logging.txt
# Analyze pattern
cat /tmp/current_logging.txt
Compare logging locations and patterns:
// Original source logging pattern
axiomLogger.info('Feature started', {
userId: user.id,
projectId,
feature: 'image-generation',
model: selectedModel,
timestamp: new Date().toISOString(),
});
// ... operation ...
axiomLogger.info('Feature completed', {
userId: user.id,
jobId: job.id,
duration: Date.now() - startTime,
});
// Error logging
axiomLogger.error('Feature failed', {
userId: user.id,
error: error.message,
stack: error.stack,
});
Logging Comparison Checklist:
- Axiom logger used (not console.log)
- Logging at same points as original (start, complete, error)
- Same log levels used (info, error, warn, debug)
- Same context fields included (userId, projectId, etc.)
- Error logging includes stack traces
- Structured logging format matches
- No PII/sensitive data logged
- Logging density matches (not too verbose, not too sparse)
Create logging comparison table:
| Log Point | Original | Current | Status | Issue |
|---|---|---|---|---|
| Operation start | `axiomLogger.info('Started')` | `axiomLogger.info('Started')` | ✅ | Match |
| Operation complete | `axiomLogger.info('Completed')` | (missing) | ❌ | Missing log |
| Error handling | `axiomLogger.error(...)` with stack | `console.error(...)` | ❌ | Using console instead of Axiom |
| Context fields | `{userId, projectId, model}` | `{userId, projectId}` | ❌ | Missing model field |
Verify logging in production:
// Check Axiom for actual logs from feature
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(24h)
| where ['feature'] == "[feature-name]" or ['message'] contains "[feature-name]"
| project ['_time'], ['level'], ['message'], ['userId'], ['projectId']
| limit 20
`,
});
Report Issues (FUNCTIONAL problems only):
- ❌ CRITICAL: Missing logging point - Original logs at job completion, implementation doesn't log at that point
- ❌ CRITICAL: Missing context field - original includes `model` field, implementation doesn't (functional data missing)
- ⚠️ Logging density - implementation has 2 log points, original has 5 (missing intermediate milestone logs)
- ❌ CRITICAL: No production logs found - Feature may not be logging to Axiom at all (run test and check Axiom)
NOT Issues (implementation differences - expected):
- ✅ OK: Original uses `console.log`, implementation uses `axiomLogger` (repo pattern)
- ✅ OK: Log message format differs (repo uses structured Axiom format)
- ✅ OK: Log levels structured differently (repo's Axiom pattern)
Validation 1.12: Supabase Migration Completeness (CRITICAL)
Check: Are ALL Supabase migrations from original source applied and complete?
Read original migrations:
# List original migrations
ls -la [reference-path]/supabase/migrations/*.sql
# Read each migration
for migration in [reference-path]/supabase/migrations/*.sql; do
echo "=== $migration ==="
cat "$migration"
echo ""
done > /tmp/original_migrations.txt
Check current migrations:
# List current migrations
supabase migration list
# Check applied vs local
supabase migration list | grep -E "local|remote"
# Read current migrations
ls -la supabase/migrations/*.sql
cat supabase/migrations/*[feature-name]*.sql
Migration comparison checklist:
- All tables from original exist in current
- All columns from original exist in current
- All enums from original exist in current
- All RLS policies from original exist in current
- All indexes from original exist in current
- All functions from original exist in current
- Column types match exactly
- Foreign key constraints match
- Default values match
- NOT NULL constraints match
Verify migrations applied:
# Check database for tables
supabase db remote sql "
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
AND table_name IN ('[table1]', '[table2]')
ORDER BY table_name, ordinal_position;
"
# Check for enum values
supabase db remote sql "
SELECT enumlabel FROM pg_enum
JOIN pg_type ON pg_enum.enumtypid = pg_type.oid
WHERE pg_type.typname = '[enum_name]'
ORDER BY enumsortorder;
"
# Check for RLS policies
supabase db remote sql "
SELECT schemaname, tablename, policyname, permissive, roles, qual, with_check
FROM pg_policies
WHERE tablename IN ('[table1]', '[table2]');
"
Create migration comparison table:
| Database Object | Original | Current | Status | Issue |
|---|---|---|---|---|
| Table: `processing_jobs` | Exists with 15 columns | Exists with 15 columns | ✅ | Match |
| Column: `job_type` | `job_type` enum | `job_type` enum | ✅ | Match |
| Enum value: `image-generation` | Present | Present | ✅ | Match |
| Enum value: `parallel-image-generation` | Present | (missing) | ❌ | Missing enum value |
| RLS policy: "Users can view own jobs" | Present | Present | ✅ | Match |
| Index: `idx_jobs_user_status` | Present | (missing) | ❌ | Missing index |
| Function: `increment_rate_limit` | `(p_rate_key, p_max_count)` | `(rate_key, max_count)` | ❌ | Parameter names differ |
Check migration history:
# Compare migration counts
echo "Original migrations: $(ls [reference-path]/supabase/migrations/*.sql | wc -l)"
echo "Current migrations: $(ls supabase/migrations/*.sql | wc -l)"
# Check for missing migrations
diff <(ls [reference-path]/supabase/migrations/*.sql | xargs -n1 basename | sort) \
<(ls supabase/migrations/*.sql | xargs -n1 basename | sort)
Report Issues:
- ❌ CRITICAL: Missing enum value - `parallel-image-generation` not in `job_type` enum
- ❌ CRITICAL: Missing index - `idx_jobs_user_status` from original not created
- ❌ CRITICAL: Function signature mismatch - `increment_rate_limit` parameter names differ (will cause errors)
- ❌ Missing table - `job_metadata` table from original not created
- ❌ Missing RLS policy - `restrict_job_deletion` policy not implemented
- ❌ Column type mismatch - `metadata` is `json` in original, `jsonb` in current
- ⚠️ Migration not applied - local migration exists but not applied to remote (check `supabase migration list`)
- ❌ CRITICAL: Schema drift - database schema doesn't match latest migrations
Phase 2: Best Practices Validation
Validate code follows all repository coding standards.
Validation 2.1: TypeScript Best Practices
Read standards:
cat docs/CODING_BEST_PRACTICES.md | grep -A 50 "TypeScript"
Check implementation:
# Branded types for IDs
grep -r "type.*Id.*=.*string &" [feature-files]    # ✅ should find branded type definitions
grep -r ": string" [feature-files] | grep -i "id"  # ❌ IDs typed as plain string (should be branded)
# Discriminated unions for errors
grep -r "type.*Error.*=" [feature-files]
# Assertion functions
grep -r "asserts.*is" [feature-files]
# No 'any' usage
grep -r ": any\|as any" [feature-files]
# Function return types
grep -r "function.*{$" [feature-files] | grep -v ": "  # functions missing a return type
Checklist:
- Branded types used for all IDs (UserId, ProjectId, JobId, etc.)
- Discriminated unions used for error handling
- Assertion functions for type guards
- No `any` types (use `unknown` or generics)
- All functions have explicit return types
- Interfaces use PascalCase
- Types use PascalCase
- Enums use SCREAMING_SNAKE_CASE values
Report Issues:
- ❌ Not using branded types: `userId: string` should be `userId: UserId`
- ❌ Using `any`: line 45 has `data: any` instead of a proper type
- ❌ Missing return type: function at line 120 doesn't specify a return type
Validation 2.1b: Placeholder Content Check (CRITICAL)
Check for unfinished implementation markers:
# Check for TODO/FIXME markers
grep -rn "TODO\|FIXME\|XXX\|HACK" [feature-files]
# Check for common placeholder strings
grep -rn "placeholder\|PLACEHOLDER\|your.*key.*here\|replace.*this\|update.*this\|change.*this" [feature-files] -i
# Check for test/mock data that shouldn't be in production
grep -rn "test@example\.com\|mock.*data\|dummy.*data\|fake.*data" [feature-files] -i
# Check for hardcoded values that should be env vars
grep -rn "sk-[a-zA-Z0-9]\{20,\}\|AIza[a-zA-Z0-9]\{35\}" [feature-files] # API keys
# Check for console.log that should be removed
grep -rn "console\.log\|console\.debug" [feature-files]
# Check for commented out code blocks
grep -rn "^.*//.*{$\|^.*//.*}$" [feature-files] | head -20
Checklist:
- No TODO/FIXME/XXX/HACK comments left in code
- No placeholder strings like "Your API key here", "Update this later"
- No test/mock/dummy data in production code
- No hardcoded API keys or secrets (should use env vars)
- No console.log/console.debug statements (should use Axiom logging)
- No large blocks of commented-out code
- All string literals are intentional, not placeholders
Report Issues:
- ❌ CRITICAL: TODO comment found at `app/api/feature/route.ts:45` - "TODO: Add error handling"
- ❌ CRITICAL: Placeholder string at `lib/services/feature.ts:120` - "YOUR_API_KEY_HERE"
- ❌ CRITICAL: Hardcoded API key at `lib/config.ts:15` - "sk-proj-abc123..."
- ❌ CRITICAL: Test data in production: `test@example.com` at `lib/utils/validation.ts:30`
- ❌ console.log found at `components/Feature.tsx:78` - should use Axiom logging or be removed
- ⚠️ Commented code block: 50 lines commented out at `lib/services/old-implementation.ts:100-150`
Example Good Implementations:
// ✅ GOOD - Using env var, not placeholder
const apiKey = process.env.FAL_API_KEY;
if (!apiKey) throw new Error('FAL_API_KEY not configured');
// ✅ GOOD - Using Axiom logging, not console.log
log.info('Processing image generation', { jobId, userId });
// ✅ GOOD - No placeholder text
const defaultEmail = user.email || 'guest@example.com';
Example Bad Implementations:
// ❌ BAD - Placeholder content
const apiKey = 'YOUR_API_KEY_HERE';
// ❌ BAD - TODO comment
// TODO: Add proper error handling later
// ❌ BAD - Console.log
console.log('Debug info:', data);
// ❌ BAD - Commented code that should be removed
// const oldFunction = () => {
// // 50 lines of old implementation...
// }
Validation 2.2: React Component Best Practices
Check component structure:
# Find React components
find [feature-path] -name "*.tsx" | xargs grep -l "export.*function\|export default"
# Check for forwardRef usage
grep -r "forwardRef" [feature-files]
# Check hook order
grep -r "useState\|useEffect\|useContext\|useMemo" [feature-files]
# Check memoization
grep -r "useMemo\|useCallback\|React.memo" [feature-files]
Checklist:
- Reusable components use `forwardRef`
- Hooks follow correct order (context → state → refs → effects → custom)
- Expensive computations use `useMemo`
- Callback functions use `useCallback` when passed to children
- Component files follow naming convention (PascalCase)
- Props interfaces defined with `interface ComponentProps`
- Event handlers named `handleEventName`
- Custom hooks start with `use` prefix
Report Issues:
- ❌ Hook order incorrect: `useEffect` called before `useState`
- ❌ Missing memoization: expensive calculation not wrapped in `useMemo`
- ❌ Missing forwardRef: reusable input component doesn't use `forwardRef`
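A compact sketch of what a passing component looks like (component and prop names are hypothetical):

```tsx
import { forwardRef, useCallback, useMemo, useState } from 'react';

interface ItemListProps {
  items: { id: string; label: string }[];
  onSelect: (id: string) => void;
}

// Reusable component exposes its root node via forwardRef
export const ItemList = forwardRef<HTMLDivElement, ItemListProps>(
  function ItemList({ items, onSelect }, ref) {
    const [query, setQuery] = useState(''); // state hooks before derived values

    // Expensive derivation wrapped in useMemo
    const visible = useMemo(
      () => items.filter((i) => i.label.toLowerCase().includes(query.toLowerCase())),
      [items, query]
    );

    // Stable identity when passed down to children
    const handleSelect = useCallback((id: string) => onSelect(id), [onSelect]);

    return (
      <div ref={ref}>
        <input value={query} onChange={(e) => setQuery(e.target.value)} />
        <ul>
          {visible.map((item) => (
            <li key={item.id} onClick={() => handleSelect(item.id)}>
              {item.label}
            </li>
          ))}
        </ul>
      </div>
    );
  }
);
```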
Validation 2.3: API Route Best Practices
Check API route structure:
# Find API routes
find [feature-path] -path "*/api/*" -name "route.ts"
# Check for withAuth middleware
grep -r "withAuth" [feature-api-routes]
# Check for rate limiting
grep -r "checkRateLimit\|RATE_LIMITS" [feature-api-routes]
# Check for input validation
grep -r "validate.*\|ValidationError" [feature-api-routes]
# Check error responses
grep -r "errorResponse\|successResponse" [feature-api-routes]
Checklist:
- All routes use `withAuth` middleware (unless public)
- Rate limiting applied with appropriate tier
- All inputs validated with assertion functions
- Service layer used for business logic (not in route handler)
- Errors handled with `errorResponse` helpers
- Success responses use consistent format
- JSDoc comments document route, params, returns
- Error codes follow `HttpStatusCode` enum
Report Issues:
- ❌ Missing auth: route doesn't use `withAuth` middleware
- ❌ Missing rate limit: no rate limiting applied
- ❌ No input validation: request body not validated
- ❌ Business logic in route: complex logic should be in service layer
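A sketch of the route shape this checklist expects. `withAuth`, `checkRateLimit`, `RATE_LIMITS`, `errorResponse`, and `successResponse` are the helpers the greps above search for; the import paths and exact signatures shown here are assumptions, not the repo's real API:

```typescript
import { NextRequest } from 'next/server';
import { withAuth } from '@/lib/api/withAuth';                        // assumed path
import { checkRateLimit, RATE_LIMITS } from '@/lib/rateLimit';        // assumed path
import { errorResponse, successResponse } from '@/lib/api/responses'; // assumed path
import { featureService } from '@/lib/services/featureService';      // assumed path

export const POST = withAuth(async (req: NextRequest, { user }) => {
  // Rate limiting with an appropriate tier
  const limit = await checkRateLimit(user.id, RATE_LIMITS.GENERATION);
  if (!limit.allowed) return errorResponse('Rate limit exceeded', 429);

  // Input validation before any work happens
  const body = await req.json();
  if (typeof body?.prompt !== 'string' || body.prompt.length === 0) {
    return errorResponse('prompt is required', 400);
  }

  // Business logic delegated to the service layer
  const job = await featureService.enqueue(user.id, body.prompt);
  return successResponse({ jobId: job.id });
});
```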
Validation 2.4: Service Layer Best Practices
Check service implementation:
# Find service files
find lib/services -name "*Service.ts"
# Check for dependency injection
grep -r "constructor.*:" lib/services/[feature]Service.ts
# Check for error handling
grep -r "try.*catch\|throw" lib/services/[feature]Service.ts
# Check for caching
grep -r "cache\|get.*from.*cache" lib/services/[feature]Service.ts
Checklist:
- Services in `/lib/services/` directory
- Dependencies passed via constructor (dependency injection)
- All methods have try-catch error handling
- Errors tracked with context
- Caching implemented where appropriate
- Cache invalidated after mutations
- Service methods are focused and single-purpose
- Return types are explicit
Report Issues:
- ❌ No dependency injection: Service directly imports Supabase client
- ❌ Missing error handling: Method doesn't catch errors
- ❌ No caching: Repeated queries not cached
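An illustrative service shape covering injection, error context, and cache invalidation; the `Cache` interface and table usage are assumptions standing in for this repo's real dependencies:

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

interface Cache {
  get<T>(key: string): T | undefined;
  set<T>(key: string, value: T): void;
  delete(key: string): void;
}

export class FeatureService {
  // Dependencies injected via constructor, not imported directly
  constructor(
    private readonly db: SupabaseClient,
    private readonly cache: Cache
  ) {}

  async getJob(jobId: string): Promise<unknown> {
    const cached = this.cache.get(`job:${jobId}`);
    if (cached) return cached;

    try {
      const { data, error } = await this.db
        .from('processing_jobs')
        .select('*')
        .eq('id', jobId)
        .single();
      if (error) throw error;
      this.cache.set(`job:${jobId}`, data);
      return data;
    } catch (err) {
      // Rethrow with context so error tracking sees where it failed
      throw new Error(`FeatureService.getJob failed for ${jobId}: ${String(err)}`);
    }
  }

  async cancelJob(jobId: string): Promise<void> {
    const { error } = await this.db
      .from('processing_jobs')
      .update({ status: 'cancelled' })
      .eq('id', jobId);
    if (error) throw error;
    this.cache.delete(`job:${jobId}`); // invalidate cache after the mutation
  }
}
```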
Validation 2.5: Database & RLS Validation
Check database patterns:
# Check migrations
ls -t supabase/migrations/*.sql | head -5
# Check for RLS policies
grep -i "policy\|rls" supabase/migrations/*.sql
# Check for indexes
grep -i "index" supabase/migrations/*.sql
# Verify ownership checks
grep -r "user_id.*=.*auth.uid()" [feature-files]
Checklist:
- Migrations include RLS policies
- All user data has `user_id` column
- RLS policies verify ownership
- Indexes created for query performance
- Foreign keys use correct types
- Enums used instead of strings for fixed values
- Timestamps use `timestamptz`
- Soft deletes use `deleted_at` (not hard deletes)
Report Issues:
- ❌ Missing RLS: Table has no Row Level Security policies
- ❌ No ownership check: RLS doesn't verify `user_id = auth.uid()`
- ❌ Missing index: no index on frequently queried column
Validation 2.6: Error Handling Validation
Check error patterns:
# Find custom error classes
grep -r "class.*Error.*extends" lib/errors/
# Check error tracking
grep -r "errorTracker\|trackError" [feature-files]
# Check error responses
grep -r "errorResponse" [feature-files]
# Check try-catch usage
grep -r "try\s*{" [feature-files]
Checklist:
- Custom error classes extend base Error
- Errors tracked with context (errorTracker)
- User-friendly messages provided
- Stack traces captured (in dev)
- Error types: ValidationError, DatabaseError, etc.
- Graceful fallbacks implemented
- No silent error swallowing (empty catch blocks)
- API errors return appropriate HTTP status codes
Report Issues:
- ❌ Silent error: Empty catch block swallows errors
- ❌ No error tracking: Errors not logged to monitoring
- ❌ Generic error message: Should provide specific user-friendly message
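A minimal sketch of the custom-error pattern, including the no-silent-swallowing rule; `persist` is a stand-in dependency and the class names are illustrative:

```typescript
export class ValidationError extends Error {
  constructor(message: string, public readonly field?: string) {
    super(message);
    this.name = 'ValidationError';
  }
}

export class DatabaseError extends Error {
  constructor(message: string, public readonly cause?: unknown) {
    super(message);
    this.name = 'DatabaseError';
  }
}

declare function persist(item: { id: string }): Promise<void>; // stand-in dependency

async function saveItem(item: { id: string }): Promise<void> {
  try {
    await persist(item);
  } catch (err) {
    // ❌ BAD: an empty catch block here would swallow the failure silently
    // ✅ GOOD: wrap with context and rethrow so it reaches error tracking
    throw new DatabaseError(`Failed to save item ${item.id}`, err);
  }
}
```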
Validation 2.7: Security Validation
Check security patterns:
# Check input validation
grep -r "validate.*String\|validate.*UUID" [feature-files]
# Check for SQL injection risks
grep -r "raw.*sql\|exec.*query" [feature-files]
# Check for XSS risks
grep -r "dangerouslySetInnerHTML\|innerHTML" [feature-files]
# Check for exposed secrets
grep -ri "api_key.*=.*['\"]" [feature-files]
Checklist:
- All user inputs validated
- No SQL injection risks (using parameterized queries)
- No XSS risks (no innerHTML, sanitized user content)
- No hardcoded secrets or API keys
- RLS policies enforce ownership
- Rate limiting prevents abuse
- Authentication required for protected routes
- CORS configured correctly
Report Issues:
- ❌ SQL injection risk: Using string concatenation for query
- ❌ Missing validation: User input not validated
- ❌ Hardcoded secret: API key found in code
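As a concrete contrast for the SQL-injection check, a sketch using bound query-builder parameters instead of string concatenation (the `projects` table is illustrative):

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

// ❌ BAD (injection risk): `SELECT * FROM projects WHERE id = '${projectId}'`
// ✅ GOOD: the query builder binds values instead of interpolating them
export async function getOwnProject(
  supabase: SupabaseClient,
  projectId: string,
  userId: string
) {
  const { data, error } = await supabase
    .from('projects')
    .select('*')
    .eq('id', projectId)    // bound parameter, not string concatenation
    .eq('user_id', userId); // explicit ownership check alongside RLS
  if (error) throw error;
  return data;
}
```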
Validation 2.8: Testing Validation
Check test coverage:
# Find test files
find . -name "*.test.ts" -o -name "*.test.tsx" -o -name "*.spec.ts"
# Check test patterns
grep -r "describe.*it\|test(" [test-files]
# Check AAA pattern
grep -A 10 "it(" [test-files] | grep -c "// Arrange\|// Act\|// Assert"
# Run tests
npm run test
Checklist:
- Tests exist for feature
- Tests follow AAA pattern (Arrange-Act-Assert)
- Test names are descriptive
- Edge cases tested
- Error paths tested
- Helper functions used for common setup
- Mocks used appropriately
- All tests pass
Report Issues:
- ❌ No tests: Feature has no test files
- ❌ Poor test names: Test named "test1", "test2" instead of descriptive names
- ❌ Missing edge cases: Only happy path tested
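For reference, an AAA-pattern test sketch. The `describe`/`it`/`expect` syntax matches Vitest/Jest-style runners, and `validateEmail` is a hypothetical helper:

```typescript
import { describe, it, expect } from 'vitest';
import { validateEmail } from '@/lib/utils/validation'; // hypothetical import

describe('validateEmail', () => {
  it('accepts a well-formed address', () => {
    // Arrange
    const input = 'user@example.com';
    // Act
    const result = validateEmail(input);
    // Assert
    expect(result).toBe(true);
  });

  it('rejects an empty string (edge case)', () => {
    // Arrange
    const input = '';
    // Act
    const result = validateEmail(input);
    // Assert
    expect(result).toBe(false);
  });
});
```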
Phase 3: Production Validation
Validation 3.1: Deployment Verification
Check deployment:
# Check recent deployments
vercel ls --prod | head -10
# Check build status
npm run build
# Check for build errors
cat .next/trace 2>/dev/null | grep -i error
Checklist:
- Latest deployment succeeded
- Build completes without errors
- No TypeScript errors
- No ESLint warnings
- All environment variables set in Vercel
- Production URL accessible
Report Issues:
- ❌ Build failed: TypeScript errors prevent deployment
- ❌ Missing env var: Variable not set in Vercel dashboard
Validation 3.2: Production Error Monitoring
Check Axiom logs:
// Check for errors in last 24h
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(24h)
| where ['severity'] == "error"
or ['level'] == "error"
or ['status'] >= 400
| summarize
error_count = count(),
unique_errors = dcount(['message']),
affected_users = dcount(['userId'])
| project error_count, unique_errors, affected_users
`,
});
// Check rate limiting
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(24h)
| where ['message'] contains "rate limit" and ['message'] contains "fallback"
| count
`,
});
Checklist:
- Zero errors in production logs (last 24h)
- No rate limit fallbacks
- No 500/400 errors
- No unhandled exceptions
- Logging is working (events appear in Axiom)
- Error tracking captures stack traces
Report Issues:
- ❌ Production errors: 25 errors in last 24h
- ❌ Rate limit fallback: Using in-memory rate limiting
- ❌ Logging broken: No logs appearing in Axiom
Validation 3.3: Performance Validation
Check performance:
// Query response times
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(24h)
| where ['url'] contains "[feature-path]"
| summarize
avg_duration = avg(['duration']),
p50 = percentile(['duration'], 50),
p95 = percentile(['duration'], 95),
p99 = percentile(['duration'], 99)
`,
});
Checklist:
- API response times < 1s (p95)
- No memory leaks detected
- No performance regressions vs baseline
- Database queries optimized (indexes used)
- No N+1 query problems
- Images optimized (Next.js Image component)
- Bundle size acceptable
Report Issues:
- ❌ Slow API: p95 response time is 3.5s (should be <1s)
- ❌ Memory leak: Memory usage grows continuously
- ❌ N+1 queries: Multiple sequential database queries
Phase 4: Documentation Validation
Validation 4.1: Code Documentation
Check JSDoc comments:
# Check API route documentation
grep -B 5 "export async function" [feature-api-routes] | grep "/\*\*"
# Check component documentation
grep -B 5 "export.*function.*Component" [feature-components] | grep "/\*\*"
# Check type documentation
grep -B 3 "interface\|type" [feature-types] | grep "/\*\*"
Checklist:
- All API routes have JSDoc comments
- Components have description comments
- Complex functions documented
- Types/interfaces documented
- Parameters documented with @param
- Return values documented with @returns
- Examples provided for complex usage
Report Issues:
- ❌ Missing documentation: API route has no JSDoc comment
- ❌ Incomplete docs: Function parameters not documented
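The level of JSDoc coverage this check expects, shown on a hypothetical function:

```typescript
/**
 * Creates a processing job for the given user.
 *
 * @param userId - Owner of the job (used for RLS scoping)
 * @param prompt - Generation prompt stored in job metadata
 * @returns The ID of the newly created job
 * @example
 * const jobId = await createGenerationJob(userId, 'a red bicycle');
 */
export async function createGenerationJob(
  userId: string,
  prompt: string
): Promise<string> {
  // ...insert into processing_jobs and return the new row's ID
  return 'job-id';
}
```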
Validation 4.2: Project Documentation
Check documentation files:
# Check if feature documented
ls docs/features/ | grep -i [feature-name]
# Check if API documented
ls docs/api/ | grep -i [feature-name]
# Check reports
ls docs/reports/ | grep -i [feature-name]
Checklist:
- Feature documented in docs/features/
- API endpoints documented in docs/api/
- Validation reports in docs/reports/
- ISSUES.md updated (resolved issues)
- CHANGELOG.md updated
- README updated if needed
- Migration guide created if breaking changes
Report Issues:
- ❌ Missing feature docs: No documentation in docs/features/
- ❌ ISSUES.md not updated: Resolved issues still marked as open
Phase 5: Generate Validation Report
Step 5.1: Compile All Findings
Create comprehensive validation report:
# Refactoring Validation Report: [Feature Name]
**Date**: [Date]
**Feature**: [Feature Name]
**Scope**: [Frontend/Backend/Full-stack]
**Files Changed**: X files
---
## Executive Summary
**Overall Status**: [✅ PASS / ⚠️ NEEDS WORK / ❌ FAIL]
- Refactor Workflow: X/12 phases passed (including 4 CRITICAL source comparison checks)
- Best Practices: X/9 categories passed (includes CRITICAL placeholder content check)
- Production Validation: X/3 checks passed
- Documentation: X/2 categories complete
**Critical Issues**: X
**Warnings**: Y
**Recommendations**: Z
**Source Comparison Summary**:
- Models/Endpoints: [✅ MATCH / ❌ MISMATCH]
- Parameters: [✅ MATCH / ❌ MISMATCH]
- Axiom Logging: [✅ MATCH / ❌ MISMATCH]
- Migrations: [✅ COMPLETE / ❌ INCOMPLETE]
---
## 1. Refactor Workflow Validation
### 1.1 Environment Variables
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- ✅ All variables documented in .env.example
- ❌ Using incorrect variable name: `FAL_KEY` should be `FAL_API_KEY`
- [List all findings]
### 1.2 Duplicate Routes
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 1.3 Queue Architecture
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 1.4 Type System
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 1.5 API Documentation
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 1.6 Feature Parity
**Status**: [✅ PASS / ❌ FAIL]
**Score**: X/Y features (Z%)
**Findings**:
- [List findings]
### 1.7 UI/UX Match
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 1.8 Production Testing
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 1.9 Model & Endpoint Verification (CRITICAL)
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
| Model/Endpoint | Original | Current | Status |
| -------------- | -------- | ------- | ------- |
| [Model 1] | [Value] | [Value] | [✅/❌] |
| [Model 2] | [Value] | [Value] | [✅/❌] |
**Issues**:
- [List all model/endpoint mismatches]
### 1.10 Parameter Matching (CRITICAL)
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
| Parameter | Original | Current | Status |
| --------- | -------- | ------- | ------- |
| [Param 1] | [Value] | [Value] | [✅/❌] |
| [Param 2] | [Value] | [Value] | [✅/❌] |
**Issues**:
- [List all parameter mismatches]
### 1.11 Axiom Logging (CRITICAL)
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
| Log Point | Original | Current | Status |
| --------- | --------- | --------- | ------- |
| [Point 1] | [Pattern] | [Pattern] | [✅/❌] |
| [Point 2] | [Pattern] | [Pattern] | [✅/❌] |
**Production Logs (24h)**: [Count] logs found
**Issues**:
- [List all logging issues]
### 1.12 Supabase Migration Completeness (CRITICAL)
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
| Database Object | Original | Current | Status |
| --------------- | --------- | --------- | ------- |
| [Object 1] | [Details] | [Details] | [✅/❌] |
| [Object 2] | [Details] | [Details] | [✅/❌] |
**Migration Status**: [X/Y applied]
**Issues**:
- [List all migration issues]
---
## 2. Best Practices Validation
### 2.1 TypeScript Best Practices
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.1b Placeholder Content Check (CRITICAL)
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
| Check Type | Count | Locations | Status |
| ---------------------- | ----- | ---------------- | ------- |
| TODO/FIXME markers | X | [List file:line] | [✅/❌] |
| Placeholder strings | X | [List file:line] | [✅/❌] |
| Test/mock data | X | [List file:line] | [✅/❌] |
| Hardcoded secrets | X | [List file:line] | [✅/❌] |
| console.log statements | X | [List file:line] | [✅/❌] |
| Commented code blocks | X | [List file:line] | [✅/❌] |
**Issues**:
- [List all placeholder content issues with severity]
### 2.2 React Components
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.3 API Routes
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.4 Service Layer
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.5 Database & RLS
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.6 Error Handling
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.7 Security
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 2.8 Testing
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
---
## 3. Production Validation
### 3.1 Deployment
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 3.2 Error Monitoring
**Status**: [✅ PASS / ❌ FAIL]
**Errors (24h)**: X errors
**Findings**:
- [List findings]
### 3.3 Performance
**Status**: [✅ PASS / ❌ FAIL]
**P95 Response Time**: Xms
**Findings**:
- [List findings]
---
## 4. Documentation Validation
### 4.1 Code Documentation
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
### 4.2 Project Documentation
**Status**: [✅ PASS / ❌ FAIL]
**Findings**:
- [List findings]
---
## 5. Critical Issues (MUST FIX)
1. **[Issue 1 Title]**
- **Location**: [File:line]
- **Impact**: [Description]
- **Fix**: [Recommendation]
2. **[Issue 2 Title]**
- [Details]
[List all critical issues]
---
## 6. Warnings (SHOULD FIX)
1. **[Warning 1 Title]**
- [Details]
[List all warnings]
---
## 7. Recommendations (NICE TO HAVE)
1. **[Recommendation 1]**
- [Details]
[List all recommendations]
---
## 8. Validation Checklist
**Refactor Workflow** (X/12):
- [ ] Environment variables verified
- [ ] Duplicate routes checked
- [ ] Queue architecture validated
- [ ] Type system consistent
- [ ] API documentation verified
- [ ] Feature parity 100%
- [ ] UI/UX matches or improved
- [ ] Production tested
- [ ] **Models & endpoints match original exactly (CRITICAL)**
- [ ] **Parameters match original exactly (CRITICAL)**
- [ ] **Axiom logging present and matches original (CRITICAL)**
- [ ] **Supabase migrations complete and applied (CRITICAL)**
**Best Practices** (X/9):
- [ ] TypeScript patterns followed
- [ ] No placeholder content (TODO/FIXME, secrets, console.log) (CRITICAL)
- [ ] React patterns followed
- [ ] API route patterns followed
- [ ] Service layer patterns followed
- [ ] Database patterns followed
- [ ] Error handling patterns followed
- [ ] Security patterns followed
- [ ] Testing patterns followed
**Production** (X/3):
- [ ] Deployment successful
- [ ] No production errors
- [ ] Performance acceptable
**Documentation** (X/2):
- [ ] Code documented
- [ ] Project docs updated
---
## 9. Next Steps
### If PASS:
- ✅ Refactoring is complete and follows all standards
- ✅ Feature is production-ready
- ✅ Close refactoring task
### If NEEDS WORK:
1. Fix critical issues (Priority 0)
2. Fix warnings (Priority 1)
3. Implement recommendations (Priority 2)
4. Re-run validation
5. Update this report
### If FAIL:
1. **STOP**: Feature is not production-ready
2. Address ALL critical issues
3. Fix major best practice violations
4. Re-run validation
5. DO NOT deploy until validation passes
---
## 10. Sign-Off
**Validation Completed By**: Claude Code
**Date**: [Date]
**Validation Version**: 1.0
**Recommendation**: [APPROVE / REVISE / REJECT]
Step 5.2: Save Report
# Save to docs/reports/
echo "[report-content]" > docs/reports/VALIDATION_[FEATURE_NAME]_[DATE].md
# Notify user
echo "✅ Validation report saved to: docs/reports/VALIDATION_[FEATURE_NAME]_[DATE].md"
Validation Result Interpretation
✅ PASS Criteria
All of the following must be true:
- ✅ 12/12 refactor workflow phases passed
- ✅ ALL 4 source comparison checks passed (models, parameters, logging, migrations)
- ✅ 9/9 best practice categories passed (including the placeholder content check)
- ✅ Zero critical issues
- ✅ Zero production errors (24h)
- ✅ Feature parity 100%
- ✅ All tests passing
- ✅ Documentation complete
Result: Feature is production-ready and follows all standards.
⚠️ NEEDS WORK Criteria
If any of:
- 1-3 critical issues found (NOT in source comparison)
- Feature parity 90-99%
- Minor best practice violations
- 1-5 production errors (24h)
- Missing some documentation
- Non-critical parameter differences (if documented)
Result: Address issues and re-validate. Feature can be deployed with plan to fix.
IMPORTANT: If source comparison issues found (models, parameters, logging, migrations), these MUST be fixed before deployment.
❌ FAIL Criteria
If any of:
- Model or endpoint mismatch from original source (CRITICAL)
- Parameter name/value mismatch from original source (CRITICAL)
- Missing Axiom logging from original source (CRITICAL)
- Incomplete Supabase migrations from original source (CRITICAL)
- 4+ critical issues found
- Feature parity < 90%
- Major best practice violations (no auth, SQL injection, etc.)
- 6+ production errors (24h)
- No tests
- Security issues found
Result: DO NOT DEPLOY. Fix critical issues and re-validate.
CRITICAL: Any source comparison failure is an automatic FAIL. The implementation MUST match the original source exactly for models, endpoints, parameters, logging patterns, and database schema.
Usage Examples
Example 1: Validate Parallel Frame Generator
# User request
"Validate the parallel frame generator refactoring"
# Skill workflow
1. Read feature files (app/suites/parallel-generation/*)
2. Check all 12 refactor workflow validations
3. Check all 9 best practice categories
4. Check production logs and performance
5. Generate report: docs/reports/VALIDATION_PARALLEL_GENERATOR_20251027.md
6. Report to user: "✅ PASS - Feature is production-ready"
Example 2: Validate After Refactoring
# User request
"I just finished refactoring the image generation feature. Can you validate it?"
# Skill workflow
1. Ask: "What files were changed?"
2. Read all changed files
3. Run comprehensive validation
4. Find 2 critical issues:
- Missing rate limiting
- Using deprecated model name
5. Generate report with NEEDS WORK status
6. Report to user with action items
Example 3: Validate with Agent Swarms (Recommended)
# User request
"Validate the parallel frame generation refactoring from the reference implementation"
# Skill workflow with agent swarms
## Phase 1: Setup (Main Process)
1. Read reference implementation path from user
2. Read all reference implementation files
3. Create source comparison matrix
4. Distribute validation work to 5 parallel agents
## Phase 2: Launch Agent Swarm (Parallel)
**Agent 1: Source & Architecture Validation** (Sequential within agent)
- Step 1: Extract models from reference vs implementation
- Step 2: Extract endpoints from reference vs implementation
- Step 3: Extract parameters from reference vs implementation
- Step 4: Check Axiom logging patterns
- Step 5: Verify Supabase migrations
- Result: Source comparison report with CRITICAL issues flagged
**Agent 2: Type System & API Validation** (Sequential within agent)
- Step 1: Validate TypeScript patterns
- Step 2: Check placeholder content (CRITICAL: TODO/FIXME, hardcoded secrets, console.log)
- Step 3: Check API route structure
- Step 4: Validate service layer
- Result: Type system report
**Agent 3: Frontend & UI/UX Validation** (Sequential within agent)
- Step 1: Validate React components
- Step 2: Check UI/UX match with reference
- Step 3: Verify feature parity
- Result: Frontend validation report
**Agent 4: Security & Error Handling** (Sequential within agent)
- Step 1: Check security patterns
- Step 2: Validate error handling
- Step 3: Verify RLS policies
- Result: Security audit report
**Agent 5: Production & Performance** (Sequential within agent)
- Step 1: Query Axiom for production errors
- Step 2: Check performance metrics
- Step 3: Verify deployment status
- Result: Production validation report
## Phase 3: Consolidation (Main Process)
1. Wait for all 5 agents to complete
2. Consolidate findings from all agents
3. Identify CRITICAL issues (especially source comparison)
4. Calculate pass/fail status
5. Generate comprehensive validation report
6. Save report to docs/reports/
7. Notify user with status and action items
## Example Result:
"❌ FAIL - 3 CRITICAL issues found in source comparison:
1. Model mismatch: Using `fal-ai/flux-pro` instead of `fal-ai/flux-dev`
2. Missing parameter: `seed` parameter not included
3. Missing Axiom logging: No logging at job completion point
Full report: docs/reports/VALIDATION_PARALLEL_GENERATOR_20251027.md"
Integration with Other Skills
This skill works with:
- refactor-feature - Use validate-refactoring after refactor-feature completes
- code-validator - More focused validation, this skill is comprehensive
- debug-nonlinear-editor - Use if validation finds production errors
- code-maintenance - Use to fix best practice violations found
Quick Reference Commands
# Check refactor workflow
grep -r "process\.env\." [feature-files] # Env vars
find app/api -name "route.ts" # Check routes
supabase migration list # Check migrations
grep -r "fal\.run" app/api/ # Check for direct API calls
# Check best practices
grep -r ": any" [feature-files] # Check for any types
grep -r "withAuth" [feature-api-routes] # Check auth
grep -r "validate.*\|assert" [feature] # Check validation
grep -r "test\|describe" [feature] # Check tests
# Check production
vercel ls --prod # Check deployment
mcp__axiom__queryApl() # Check errors
npm run build # Check build
# Generate report
# Follow Phase 5 steps
Remember: This is a validation-only skill. It identifies issues but does not fix them. Use refactor-feature or other skills to address identified issues.