| name | refactor-feature |
| description | ONLY use when user explicitly says "refactor" an existing feature, or provides reference code/implementation to reimplement with feature parity matching. NOT for new features, NOT for debugging, NOT for general development. User must explicitly request "refactor", "reimplement from reference", or "use swarms/parallel agents for refactoring". |
# Feature Refactoring Skill
A systematic approach to refactoring features with agent swarms, ensuring feature parity, fixing bugs, and validating implementations thoroughly.
## When to Use This Skill
ONLY use this skill when:
- User explicitly says "refactor" an existing feature (e.g., "refactor the video export feature")
- User provides reference code/implementation and asks to reimplement it with feature parity
- User explicitly requests "swarms" or "parallel agents" for refactoring work
DO NOT use this skill for:
- ❌ New feature development (use maintain-coding-standards instead)
- ❌ Debugging or fixing bugs (use debug-nonlinear-editor or debug-issues-md instead)
- ❌ Type system fixes or general development work
- ❌ "Complex features" that aren't refactors
- ❌ General improvements or enhancements
Key distinction: This skill is for REFACTORING (restructuring existing code) or REIMPLEMENTING from reference code, NOT for building new features or debugging.
IMPORTANT: Before using this skill, ensure Axiom is configured and working (see Phase 0).
## Core Principles
- Always Use Agent Swarms for Complex Work - Launch multiple parallel agents for comprehensive diagnosis
- CLI & MCP First - Never ask users to check dashboards; use CLI/MCP tools
- Adapt to Repo Patterns (CRITICAL) - Take functionality from reference code but implement using THIS repo's established patterns:
  - Infrastructure (MANDATORY): ALWAYS use Vercel (deployment) + Supabase (database/storage), NEVER other platforms even if reference uses them
  - Queue Architecture: Use Supabase queues (`processing_jobs` table), NOT direct AI API calls
  - Deployment: Vercel deployment patterns ONLY, ignore any Docker/Railway/other deployment from reference
  - Database: Supabase ONLY (PostgreSQL with RLS), ignore any MongoDB/MySQL/other databases from reference
  - Logging: Axiom logger (`axiomLogger`), NOT console.log or other logging systems from reference
  - Error Tracking: Sentry patterns from this repo, NOT other error tracking from reference
  - Testing: This repo's testing patterns and structure (Jest, React Testing Library)
  - TypeScript: This repo's branded types, error handling, patterns
  - API Routes: This repo's `withAuth`, rate limiting, service layer patterns
  - Storage: Supabase Storage with user-scoped paths (`{user_id}/{project_id}/...`), NOT localStorage, local fs, S3, or any other storage from reference (see `/docs/STORAGE_GUIDE.md`)
  - URL Handling: NEVER pass `supabase://` URLs to browser media elements (`<img>`, `<video>`, `<audio>`). Use the `SignedImage`/`SignedVideo`/`SignedAudio` components or the `useSignedStorageUrl` hook (see `/docs/architecture/URL_HANDLING.md`)
- Systematic Diagnosis - Follow data flow from frontend → API → database → workers
- Type System Integrity - Always verify TypeScript types match database schema
- Build Early, Build Often - Build after every major change to catch errors
- Reference Documentation MANDATORY - Create comprehensive reference document (Phase 1.1.5) documenting ALL models, endpoints, workflows, features BEFORE implementation; validate final product against this document (Phase 4.4.5)
- Feature Parity Validation - Compare FUNCTIONALITY with reference (not implementation details)
- Verify Axiom First - Always confirm Axiom logging works before starting
IMPORTANT DISTINCTION: When refactoring from reference code:
- ✅ DO: Copy the functionality, features, UI/UX, business logic
- ✅ DO: Adapt implementation to use THIS repo's patterns (queues, Axiom, Vercel, etc.)
- ❌ DON'T: Blindly copy implementation details that conflict with repo patterns
- ❌ DON'T: Use direct AI API calls if reference does (use queues instead)
- ❌ DON'T: Copy different logging patterns (use Axiom)
- ❌ DON'T: Copy different auth patterns (use this repo's withAuth)
- ❌ DON'T: Use localStorage or local fs for persistent data (use Supabase Storage with user-scoped paths)
- ❌ DON'T: Pass `supabase://` URLs directly to browser media elements (use URL conversion components/hooks)
## Workflow Phases
### Phase 0: Axiom Setup and Verification (BEFORE STARTING)
CRITICAL: Always verify Axiom is configured and working BEFORE starting any refactoring work!
Axiom is the observability platform used for production logging, error tracking, and debugging. Without proper Axiom configuration, you cannot validate fixes or debug production issues.
#### Step 0.1: Verify Axiom Configuration
Check Environment Variables:
# Check if Axiom is configured
echo "AXIOM_TOKEN: ${AXIOM_TOKEN:0:10}..." # Should show first 10 chars
echo "AXIOM_DATASET: $AXIOM_DATASET" # Should show dataset name
# In .env.local, verify these exist:
grep "AXIOM_TOKEN" .env.local
grep "AXIOM_DATASET" .env.local
Required Environment Variables:
- `AXIOM_TOKEN` - Axiom API token (get from https://app.axiom.co/settings/tokens)
- `AXIOM_DATASET` - Dataset name (usually `nonlinear-editor` or `genai-video-production`)

If Missing:
- Go to Axiom Dashboard: https://app.axiom.co/
- Navigate to Settings → API Tokens
- Create new token with "Ingest" permission
- Add to `.env.local`:
  `AXIOM_TOKEN=xaat-your-token-here`
  `AXIOM_DATASET=nonlinear-editor`
- Restart dev server: `npm run dev`
#### Step 0.2: Verify Axiom MCP Tools Work
Test Axiom Connection:
// List available datasets
await mcp__axiom__listDatasets();
// Expected: Returns array with 'nonlinear-editor', 'genai-video-production', etc.
// Query recent logs
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(1h)
| summarize count()
`,
});
// Expected: Returns count of recent log entries
// Get dataset schema
await mcp__axiom__getDatasetSchema({
datasetName: 'nonlinear-editor',
});
// Expected: Returns field names and types
If MCP Tools Fail:
- Check MCP configuration in Claude Code settings
- Verify Axiom token has correct permissions
- Verify dataset exists in Axiom dashboard
- Check network connectivity to Axiom API
#### Step 0.3: Verify Logging Works
Test Server Logging:
Create a test API route to verify server logging:
// app/api/test-logging/route.ts
import { serverLogger } from '@/lib/serverLogger';
import { NextResponse } from 'next/server';
export async function GET() {
serverLogger.info({ test: true, timestamp: Date.now() }, 'Axiom logging test');
return NextResponse.json({ ok: true });
}
Test it:
# Call test endpoint
curl http://localhost:3000/api/test-logging
# Check Axiom (wait ~30 seconds for ingestion)
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(2m)
| where ['message'] contains "Axiom logging test"
`
});
Expected: Should see the test log entry in Axiom
Test Browser Logging:
Add to any page component:
import { browserLogger } from '@/lib/browserLogger';
// In component
browserLogger.info({ test: true, page: window.location.pathname }, 'Browser logging test');
Check Axiom:
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(2m)
| where ['message'] contains "Browser logging test"
`,
});
Expected: Should see the browser log entry in Axiom
#### Step 0.4: Create Essential Axiom Monitors
Before starting refactoring, set up monitors to catch issues early:
# Use Axiom CLI or dashboard to create monitors:
# 1. Error Rate Monitor
Monitor Name: "High Error Rate"
Query:
['nonlinear-editor']
| where ['_time'] > ago(5m)
| where ['severity'] == "error" or ['level'] == "error"
| summarize error_count = count()
| where error_count > 10
Alert: Notify when error_count > 10 in 5 minutes
# 2. API Failure Monitor
Monitor Name: "API Failures"
Query:
['nonlinear-editor']
| where ['_time'] > ago(5m)
| where ['status'] >= 500
| summarize failure_count = count()
| where failure_count > 5
Alert: Notify when failure_count > 5 in 5 minutes
# 3. Rate Limit Fallback Monitor
Monitor Name: "Rate Limit Fallbacks"
Query:
['nonlinear-editor']
| where ['_time'] > ago(10m)
| where ['message'] contains "rate limit" and ['message'] contains "fallback"
| count
Alert: Notify when count > 0
#### Step 0.5: Axiom Quick Reference for Refactoring
Common Axiom Queries for Refactoring:
// 1. Check for recent errors
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(30m)
| where ['severity'] == "error"
| project ['_time'], ['message'], ['stack'], ['userId'], ['url']
| order by ['_time'] desc
| limit 50
`,
});
// 2. Check specific feature logs
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(1h)
| where ['message'] contains "your-feature-name"
| project ['_time'], ['level'], ['message']
| order by ['_time'] desc
`,
});
// 3. Check API endpoint performance
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(1h)
| where ['url'] contains "/api/your-endpoint"
| summarize
count(),
avg_duration = avg(['duration']),
p95 = percentile(['duration'], 95),
errors = countif(['status'] >= 400)
by bin(['_time'], 5m)
`,
});
// 4. Check rate limiting
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(30m)
| where ['message'] contains "rate limit"
| summarize count() by ['message']
| order by count_ desc
`,
});
#### Step 0.6: Axiom Verification Checklist
Before starting refactoring, verify:
- `AXIOM_TOKEN` environment variable set
- `AXIOM_DATASET` environment variable set
- MCP Axiom tools can list datasets
- MCP Axiom tools can query logs
- Test server log appears in Axiom
- Test browser log appears in Axiom
- Error rate monitor created
- API failure monitor created
- Rate limit fallback monitor created
- Familiar with Axiom query syntax (APL)
If ANY item fails, stop and fix Axiom setup before proceeding!
Without working Axiom, you cannot:
- Debug production issues
- Validate fixes
- Monitor for errors during refactoring
- Track performance impacts
- Verify rate limiting works
Axiom is CRITICAL infrastructure - do not skip this phase!
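Below is a minimal Node/TypeScript sketch that automates the locally checkable parts of this checklist. The script name `scripts/verify-axiom.ts` and its behavior are illustrative, not an existing repo script; it assumes the dev server and test route from Step 0.3 are running and that you launch it with `.env.local` loaded, e.g. `npx dotenv-cli -e .env.local npx tsx scripts/verify-axiom.ts` (the same launch pattern used later for the queue worker).

```typescript
// scripts/verify-axiom.ts (hypothetical helper)
// Checks the Phase 0 items that do not require the Axiom MCP tools.
const REQUIRED = ['AXIOM_TOKEN', 'AXIOM_DATASET'] as const;

async function main(): Promise<void> {
  let failed = false;

  // 1. Environment variables must be present (loaded via dotenv-cli, see command above).
  for (const name of REQUIRED) {
    if (!process.env[name]) {
      console.error(`❌ MISSING: ${name}`);
      failed = true;
    } else {
      console.log(`✅ PRESENT: ${name}`);
    }
  }

  // 2. Hit the test route from Step 0.3; the matching log line should then show up in Axiom.
  try {
    const res = await fetch('http://localhost:3000/api/test-logging');
    console.log(res.ok ? '✅ /api/test-logging responded' : `❌ /api/test-logging returned ${res.status}`);
    if (!res.ok) failed = true;
  } catch {
    console.error('❌ Dev server not reachable on localhost:3000');
    failed = true;
  }

  if (failed) process.exit(1);
  console.log('Now confirm the test entry in Axiom via mcp__axiom__queryApl (Step 0.3).');
}

main();
```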
### Phase 1: Planning & Analysis (ALWAYS START HERE)
Use TodoWrite tool immediately to track the refactoring workflow.
#### Step 1.1: Launch Planning Agent Swarm
Launch parallel agents (the eight listed below) to analyze different aspects concurrently:
Agent 1: Analyze reference implementation AND flag infrastructure differences
- Read reference code
- Document all features
- List API endpoints
- Identify dependencies
- **CRITICAL**: Check what infrastructure reference uses (deployment, database, storage, etc.)
- **FLAG**: Document ANY infrastructure that differs from Vercel + Supabase
- **WARN**: If reference uses Docker, Railway, MongoDB, S3, localStorage, etc.
- **NOTE**: We will adapt functionality but use ONLY Vercel + Supabase infrastructure
Agent 2: Analyze current implementation (if exists)
- Read current codebase
- Compare with reference
- Identify gaps
- List breaking changes
Agent 3: Check database schema
- Verify table structures
- Check enum types
- Validate constraints
- Ensure migrations applied
Agent 4: Check type definitions
- Verify TypeScript types
- Check branded types
- Validate API contracts
- Check Supabase types
Agent 5: Review related systems
- Check rate limiting
- Verify authentication
- Review error handling
- Check logging
Agent 6: Check for existing routes and queues
- Search app/api/ for duplicate routes
- List all existing generation endpoints
- Verify processing_jobs table and job types
- Check if feature already uses queues
- Identify routes that need queue migration
Agent 7: Verify environment variables (CRITICAL)
- Read current repo's .env.example
- Inventory all process.env usage in reference code
- Create variable mapping matrix (reference → current)
- Identify mismatched variable names (e.g., FAL_KEY vs FAL_API_KEY)
- Check for missing required variables
- Generate user notification if variables missing
- Update code to use correct variable names
Agent 8: Verify external API integrations with Firecrawl
- Identify all external APIs used (FAL AI, Replicate, OpenAI, etc.)
- Scrape latest documentation for each API
- Extract correct endpoints, parameters, auth methods
- Compare with current implementation
- Report any mismatches or deprecated features
Deliverable: External API Parity Matrix (MUST COMPLETE BEFORE IMPLEMENTATION)
- Table columns: `provider`, `base_url`, `endpoint_path`, `http_method`, `auth_header`, `required_params`, `optional_params`, `expected_response_keys`
- Populate entries directly from the reference implementation/documentation (NO assumptions)
- Flag any endpoint from the reference that is deprecated or undocumented so it can be escalated immediately
- Publish the matrix with the Phase 1 reference packet; all downstream phases must use it as the single source of truth for external calls
- Include `doc_source` and `doc_published_at` fields for each row so updates can be traced back to official November 2025 documentation
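If you keep the matrix in code alongside the markdown table, a row type like the sketch below keeps the columns consistent. The interface name and sample values are illustrative only; the example row reuses strings from the API Contract examples later in this skill, and real rows must come from the reference implementation or official docs.

```typescript
// Hypothetical row type mirroring the External API Parity Matrix columns.
interface ExternalApiParityRow {
  provider: string;
  base_url: string;
  endpoint_path: string;
  http_method: 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE';
  auth_header: string;
  required_params: string[];
  optional_params: string[];
  expected_response_keys: string[];
  doc_source: string;        // URL of the official doc the row was taken from
  doc_published_at: string;  // publish date of that doc version (ISO string)
}

// Illustrative row only; replace every value with what you extracted from the reference.
export const exampleRow: ExternalApiParityRow = {
  provider: 'FAL',
  base_url: 'https://fal.run/',
  endpoint_path: '/fal-ai/flux-dev',
  http_method: 'POST',
  auth_header: 'Authorization: Key ...',
  required_params: ['prompt'],
  optional_params: ['image_size', 'guidance_scale', 'num_inference_steps'],
  expected_response_keys: ['images'],
  doc_source: 'https://example.com/fal-docs', // replace with the actual doc URL you sourced
  doc_published_at: '2025-11-01',
};
```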
#### Step 1.1.5: Reference Documentation Agent (CRITICAL - MANDATORY)
ALWAYS create a comprehensive reference document BEFORE implementation!
This document becomes the single source of truth for validation. The final validation agent will compare the implemented feature against this document to ensure 100% parity.
Launch Dedicated Documentation Agent:
Task: Create Comprehensive Reference Implementation Documentation
OBJECTIVE: Document EVERY aspect of the reference implementation to serve as validation baseline.
AGENT INSTRUCTIONS:
1. READ REFERENCE CODE THOROUGHLY
- Read ALL files in reference implementation
- Identify main entry points (routes, components, pages)
- Map complete code structure
- Document file relationships and dependencies
2. DOCUMENT EXACT API CONTRACTS (CRITICAL - Prevents Hallucination)
**Extract EXACT model names/versions:**
```bash
# Search for model identifiers
grep -rEo "(fal-ai|ideogram|runway|replicate|openai|anthropic)/[a-zA-Z0-9_-]+" reference_code/
grep -rEio "v[0-9]+|gen-[0-9]+|gpt-[0-9]" reference_code/
```

Document in format:
### EXACT Models Used (MUST PRESERVE)
| Model Name | Exact String | Version | Used For | Parameters |
| ------------ | ----------------- | ------- | ------------- | ---------------------------------- |
| FAL Flux Dev | "fal-ai/flux-dev" | latest | Image gen | prompt, image_size, guidance_scale |
| Ideogram V3 | "ideogram-v3" | v3 | Text-to-image | prompt, aspect_ratio, style_type |
**CRITICAL**: These strings must be preserved EXACTLY in implementation. NO modifications allowed.
3. DOCUMENT EXACT ENDPOINTS

Extract all API endpoints:

```bash
grep -rEo "https?://[a-zA-Z0-9.-]+(/[a-zA-Z0-9/_-]*)?" reference_code/
```

Document:

### EXACT API Endpoints (MUST PRESERVE)
| Service  | Base URL                | Endpoint         | Method | Auth               |
| -------- | ----------------------- | ---------------- | ------ | ------------------ |
| FAL      | https://fal.run/        | /fal-ai/flux-dev | POST   | Authorization: Key |
| Ideogram | https://api.ideogram.ai | /v3/generate     | POST   | Api-Key:           |

4. DOCUMENT COMPLETE USER WORKFLOW
Trace end-to-end user journey:
- Initial page load → what user sees
- First user action → what happens
- Each subsequent interaction → system response
- Final output → success/error states
Document as numbered steps:
### Complete User Workflow

1. User lands on /generate page
   - Sees: Input form with prompt field, model selector, generate button
   - UI: Grid layout, left sidebar with history, main canvas for preview
2. User enters prompt "a cat in space"
   - Character count shows 14/1000
   - Generate button enables
3. User selects model "ideogram-v3" from dropdown
   - Dropdown shows: ["ideogram-v3", "fal-ai/flux-dev", "runway/gen3-alpha"]
   - Selection updates UI with model-specific options
4. User clicks "Generate"
   - Button disables, shows spinner
   - Toast: "Generation started"
   - Status polling begins (every 2s)
5. System processes (backend)
   - API call to /api/generate
   - Creates job in queue
   - Worker picks up job
   - Calls external API with EXACT model: "ideogram-v3"
   - Receives result
6. User sees result
   - Image appears in canvas
   - Download button appears
   - "Generate Another" button shown
   - Image added to history sidebar

[Continue for EVERY possible user action and system response]

5. DOCUMENT ALL FEATURES (100% Coverage)
Create exhaustive feature checklist:
### Feature Inventory (ALL must be implemented)

#### Core Features
- [ ] Text prompt input (min 1 char, max 1000 chars)
- [ ] Model selection dropdown (3 models: ideogram-v3, fal-ai/flux-dev, runway/gen3-alpha)
- [ ] Generate button (disabled when invalid)
- [ ] Real-time generation status
- [ ] Result preview (zoom, pan, download)
- [ ] Generation history (last 10 items, persisted)

#### UI/UX Elements
- [ ] Left sidebar: 280px wide, history list
- [ ] Main canvas: Flex-grow, responsive
- [ ] Header: Logo, user avatar, settings
- [ ] Footer: Credits, privacy, terms
- [ ] Loading states: Skeleton, spinner, progress bar
- [ ] Error states: Toast notifications, inline errors
- [ ] Success states: Green checkmark, success toast

#### Technical Features
- [ ] Job queuing (Supabase processing_jobs)
- [ ] Status polling (every 2s, max 5min)
- [ ] Rate limiting (10 req/min)
- [ ] Authentication (required)
- [ ] Input validation (prompt length, model enum)
- [ ] Error handling (API errors, timeouts, network issues)
- [ ] Logging (Axiom at key points)

6. DOCUMENT MODEL PARAMETERS (EXACT)
### Model Parameters (EXACT - NO MODIFICATIONS)

**ideogram-v3:**
- prompt: string (required)
- aspect_ratio: "1:1" | "16:9" | "9:16" | "4:3" | "3:4" (default: "1:1")
- style_type: "realistic" | "anime" | "3d" | "design" (default: "realistic")
- magic_prompt_option: "AUTO" | "ON" | "OFF" (default: "AUTO")

**fal-ai/flux-dev:**
- prompt: string (required)
- image_size: "square_hd" | "square" | "portrait_4_3" | "portrait_16_9" (default: "square_hd")
- guidance_scale: number (1-20, default: 7.5)
- num_inference_steps: number (1-50, default: 28)

7. DOCUMENT UI LAYOUT & STYLING
### UI Layout (EXACT dimensions and styling)

**Container:**
- Layout: Grid with sidebar + main
- Grid template: "280px 1fr"
- Gap: 24px
- Padding: 32px
- Background: gradient from #f0f0f0 to #ffffff

**Sidebar:**
- Width: 280px
- Height: 100vh - header
- Background: white
- Border-radius: 12px
- Box-shadow: 0 2px 8px rgba(0,0,0,0.1)
- Padding: 16px

**Main Canvas:**
- Flex: 1
- Min-height: 600px
- Background: white
- Border-radius: 12px
- Display: flex, flex-direction: column

8. CREATE VALIDATION CHECKLIST
### Final Validation Checklist (100% Required)

#### Model/Endpoint Preservation
- [ ] "ideogram-v3" preserved exactly (not changed to ideogram-v2)
- [ ] "fal-ai/flux-dev" preserved exactly
- [ ] "runway/gen3-alpha" preserved exactly
- [ ] API endpoint "https://api.ideogram.ai/v3/generate" exact
- [ ] All parameter names match exactly (aspect_ratio, not aspectRatio)

#### Feature Parity
- [ ] All 47 features from inventory implemented
- [ ] User workflow matches 100% (all 12 steps work)
- [ ] UI layout matches exactly (dimensions, colors, spacing)
- [ ] All error states handled identically
- [ ] All success states match

#### Axiom Logging
- [ ] Log at generation start (with userId, prompt, model)
- [ ] Log at job queue (with jobId, status)
- [ ] Log at API call (with endpoint, model, parameters)
- [ ] Log at result received (with imageUrl, duration)
- [ ] Log on errors (with error message, stack trace)

#### Storage Patterns
- [ ] Images stored at: {user_id}/{project_id}/images/{timestamp}.png
- [ ] NO localStorage for images/assets
- [ ] Temp files cleaned up in finally blocks
- [ ] Storage cleanup on DB insert failure

#### Best Practices
- [ ] withAuth middleware on all routes
- [ ] Rate limiting (tier2_resource_creation)
- [ ] Input validation (validateAll with validateString, validateEnum)
- [ ] Error tracking (trackError with category, context)
- [ ] Try/catch on all async operations
OUTPUT: Save as docs/reports/REFACTORING_REFERENCE_[FEATURE_NAME].md
This document is MANDATORY and will be used by the Final Validation Agent.
**After Agent Completes:**
1. Review the reference document for completeness
2. Verify ALL models/endpoints are documented with exact strings
3. Verify complete user workflow is captured (no steps missing)
4. Verify ALL features are listed (100% coverage)
5. Save document in `docs/reports/` for reference during implementation
6. **Pass this document to ALL implementation agents**
7. **Pass this document to the Final Validation Agent in Phase 4**
**Critical Success Criteria:**
- [ ] Reference document created and saved
- [ ] All model names documented with EXACT strings (including versions)
- [ ] All API endpoints documented with full URLs
- [ ] Complete user workflow documented (every step)
- [ ] All features inventoried (100% coverage)
- [ ] All parameters documented with exact keys and valid ranges
- [ ] UI layout documented with exact dimensions
- [ ] Validation checklist created for Phase 4
- [ ] Document location: `docs/reports/REFACTORING_REFERENCE_[FEATURE_NAME].md`
**This reference document prevents:**
- ❌ Model name hallucination (ideogram-v3 → ideogram-v2)
- ❌ Missing features
- ❌ Workflow deviations
- ❌ Parameter name changes
- ❌ Endpoint URL modifications
**This reference document enables:**
- ✅ 100% feature parity validation
- ✅ Exact model preservation
- ✅ Complete workflow reproduction
- ✅ Automated final validation
#### Step 1.2: Create Feature Parity Matrix
Document ALL **FUNCTIONALITY** from reference implementation (not implementation details):
**What to extract from reference:**
- ✅ Features and capabilities (what it does)
- ✅ UI/UX elements and layout (how it looks)
- ✅ User flows and interactions (how users use it)
- ✅ Business logic and validation rules
- ✅ Models used and their parameters (adapt to repo patterns)
- ❌ NOT: Specific implementation details (API structure, logging method, queue vs direct calls)
- ❌ NOT: Infrastructure patterns (deployment, monitoring tools)
- ❌ NOT: Auth/rate limiting implementation (use this repo's patterns)
```markdown
## Feature Parity Matrix
| Feature | Reference | Current | Status | Priority | Implementation Notes |
| ---------------------------- | --------- | ------- | -------- | -------- | ------------------------------------------------------- |
| Chat interface | ✓ | ✗ | Missing | P0 | Use repo's chat patterns |
| Model selection | ✓ | ✓ | Complete | - | Uses fal-ai/flux-dev (same model) |
| Image upload | ✓ | ✗ | Missing | P1 | Use repo's Supabase storage patterns |
| Image generation | ✓ | ✗ | Missing | P0 | **ADAPT**: Use Supabase queue, NOT direct API calls |
| Axiom logging at key points | ✓ | ✗ | Missing | P0 | Use repo's axiomLogger (same log points, repo's format) |
| ...continue for ALL features |
```
Key Principle: Extract WHAT the feature does, then implement using THIS repo's HOW.
#### Step 1.2.5: Extract API Contract (CRITICAL - Prevents Model/Endpoint Hallucination)
MANDATORY: Extract exact model names, endpoints, and parameters BEFORE implementation!
This step prevents the common issue where agents change model versions (e.g., ideogram-v3 → ideogram-v2) or endpoints during refactoring.
What to Extract:
Model Names and Versions (EXACT strings):
- Search for all AI model identifiers
- Extract complete model names with versions
- Examples: `"fal-ai/flux-dev"`, `"ideogram-v3"`, `"runway-gen3"`, `"gpt-4-turbo"`
API Endpoints (EXACT URLs):
- Extract all API base URLs
- Extract all endpoint paths
- Examples: `"https://api.fal.ai/"`, `"/v1/generations"`
Model Parameters (EXACT keys and ranges):
- Extract all parameter names
- Document valid value ranges
- Examples: `image_size: "1024x1024"`, `guidance_scale: 7.5`
Authentication Methods:
- API key headers
- Bearer tokens
- OAuth flows
Extraction Workflow:
# 1. Search for model identifiers with version patterns
grep -rEo "[a-zA-Z0-9_-]+/(v[0-9]+|[a-z]+-v[0-9]+|[a-z]+-[a-z]+)" reference_code/ | sort -u
# 2. Search for version suffixes in model names
grep -rEio "ideogram[- ]v[0-9]|flux[- ](dev|pro)|runway[- ]gen[0-9]|gpt-[0-9]" reference_code/ | sort -u
# 3. Extract API URLs
grep -rEo "https?://[a-zA-Z0-9.-]+(/[a-zA-Z0-9/_-]*)?" reference_code/ | sort -u
# 4. Find model parameter keys
grep -rEo "(image_size|guidance_scale|num_inference_steps|prompt|negative_prompt|aspect_ratio|style_type|duration|resolution)" reference_code/ | sort -u
# 5. Search for specific model names
grep -ri "ideogram\|flux\|runway\|replicate\|openai\|anthropic\|fal-ai" reference_code/ | grep -v node_modules
API Contract Document Template:
Create this document and reference it during implementation:
## API Contract (MUST PRESERVE EXACTLY)
### Models Used
| Model Name | Exact String | Used For | Parameters |
| ------------ | ------------------- | ---------------- | ---------------------------------- |
| FAL Flux Dev | "fal-ai/flux-dev" | Image generation | prompt, image_size, guidance_scale |
| Ideogram V3 | "ideogram-v3" | Text-to-image | prompt, aspect_ratio, style_type |
| Runway Gen-3 | "runway/gen3-alpha" | Video generation | prompt, duration, resolution |
| GPT-4 Turbo | "gpt-4-turbo" | Chat completion | messages, temperature, max_tokens |
### API Endpoints
| Service | Base URL | Endpoint Path | Method | Auth Header |
| -------- | ------------------------ | ---------------- | ------ | --------------------- |
| FAL | https://fal.run/ | /fal-ai/flux-dev | POST | Authorization: Key... |
| Ideogram | https://api.ideogram.ai | /v3/generate | POST | Api-Key: ... |
| Runway | https://api.runwayml.com | /v1/generations | POST | Authorization: Bearer |
### Model Parameters (EXACT)
**FAL Flux Dev:**
- `prompt`: string (required)
- `image_size`: "square_hd" | "square" | "portrait_4_3" | "portrait_16_9" | "landscape_4_3" | "landscape_16_9"
- `guidance_scale`: number (1-20, default: 7.5)
- `num_inference_steps`: number (1-50, default: 28)
- `enable_safety_checker`: boolean (default: true)
**Ideogram V3:**
- `prompt`: string (required)
- `aspect_ratio`: "1:1" | "16:9" | "9:16" | "4:3" | "3:4"
- `style_type`: "realistic" | "anime" | "3d" | "design"
- `magic_prompt_option`: "AUTO" | "ON" | "OFF"
[... continue for ALL models ...]
### CRITICAL PRESERVATION RULES
1. **NEVER modify model names** - Use EXACT strings from this contract
2. **NEVER change endpoints** - Use EXACT URLs from this contract
3. **NEVER alter parameter names** - Use EXACT keys from this contract
4. **ONLY adapt**: Queue architecture, logging, auth, rate limiting (repo patterns)
5. **VERIFY** after implementation that all model names/endpoints match exactly
Attach the External API Parity Matrix:
- Reuse the matrix built in Phase 1.1 (provider/base URL/path/method/auth/params/response keys)
- Confirm every row is sourced from the reference implementation or official docs dated November 2025 or newer (record doc URL + publish date per row)
- Leave internal API routes out of this matrix; only external upstream calls belong here
- Mark the matrix as the authoritative contract for Phase 2 implementation and Phase 4 validation
Validation Checklist:
After extraction, verify:
- ALL model names extracted with exact version strings
- ALL API endpoints documented with full URLs
- ALL model parameters documented with valid ranges
- Auth methods identified for each API
- Request/response schemas documented
- Created API Contract document to reference during implementation
- Marked critical items as "MUST PRESERVE EXACTLY"
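A minimal TypeScript sketch of how the "MUST PRESERVE EXACTLY" check could be automated after implementation. The script path is hypothetical, and the string list below reuses the examples from this skill; replace it with the exact strings you actually extracted into the API Contract document.

```typescript
// scripts/check-api-contract.ts (hypothetical; run with: npx tsx scripts/check-api-contract.ts)
import { readdirSync, readFileSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Replace with the EXACT strings from your API Contract document.
const MUST_PRESERVE = ['fal-ai/flux-dev', 'ideogram-v3', 'https://api.ideogram.ai'];

// Recursively collect TypeScript sources from the directories holding the new implementation.
function collectFiles(dir: string, out: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) collectFiles(full, out);
    else if (/\.(ts|tsx)$/.test(entry)) out.push(full);
  }
  return out;
}

const haystack = [...collectFiles('app'), ...collectFiles('lib')]
  .map((file) => readFileSync(file, 'utf8'))
  .join('\n');

const missing = MUST_PRESERVE.filter((s) => !haystack.includes(s));
if (missing.length > 0) {
  console.error('❌ Contract strings missing from implementation:', missing);
  process.exit(1);
}
console.log('✅ All contract strings found verbatim.');
```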
#### Step 1.3: Identify Integration Points
Map out all integration points:
- API routes and their paths
- Database tables and columns
- External services (AI models, storage)
- Authentication flows
- Rate limiting tiers
- WebSocket connections (if any)
#### Step 1.3.5: Identify Anti-Patterns to Avoid (CRITICAL)
MANDATORY: Identify bad patterns from reference code that should NOT be copied!
Reference code may contain patterns that conflict with this repo's best practices. Explicitly identify these BEFORE implementation to avoid copying them.
Common Anti-Patterns in Reference Code:
1. localStorage for Persistent Data
// ❌ AVOID (from reference code):
localStorage.setItem('userData', JSON.stringify(user));
localStorage.setItem('projectData', JSON.stringify(project));
localStorage.setItem('assets', JSON.stringify(assets));
// ✅ USE (this repo's pattern):
await supabase.from('users').insert(user);
await supabase.from('projects').insert(project);
await supabase.storage.from('assets').upload(path, file);
Search: grep -r "localStorage\\.setItem\|sessionStorage\\.setItem" reference_code/
If found: Mark as "DO NOT COPY - Use Supabase Database/Storage instead"
2. Direct AI API Calls (Not Queued)
// ❌ AVOID (from reference code):
export async function POST(request: Request) {
const result = await fal.run('fal-ai/flux-dev', { prompt });
return NextResponse.json({ image: result.images[0].url });
}
// ✅ USE (this repo's pattern):
export const POST = withAuth(async (request, { user, supabase }) => {
const { data: job } = await supabase
.from('processing_jobs')
.insert({
user_id: user.id,
job_type: 'image_generation',
input_data: { prompt },
})
.select()
.single();
return successResponse({ jobId: job.id, status: 'queued' });
});
Search: grep -r "await fal\\.run\|await replicate\\.run\|await openai\\.chat" reference_code/app/api/
If found: Mark as "DO NOT COPY - Use Supabase queues instead"
3. Missing Error Handling
// ❌ AVOID (from reference code):
const result = await fetch(url);
const data = await result.json();
// ✅ USE (this repo's pattern):
try {
const result = await fetch(url);
if (!result.ok) {
throw new Error(`API error: ${result.statusText}`);
}
const data = await result.json();
} catch (error) {
trackError(error, {
category: ErrorCategory.API,
context: { url },
});
throw error;
}
Search: grep -A 5 "await fetch" reference_code/ | grep -v "catch"
If found: Mark as "DO NOT COPY - Add try/catch with trackError"
4. Hardcoded Credentials
// ❌ AVOID (from reference code):
const apiKey = 'sk-1234567890abcdef';
const supabaseUrl = 'https://example.supabase.co';
// ✅ USE (this repo's pattern):
const apiKey = process.env.FAL_API_KEY;
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
Search: grep -r "sk-\|pk_\|https://.*\\.supabase\\.co" reference_code/
If found: Mark as "DO NOT COPY - Use environment variables"
5. console.log Instead of Structured Logging
// ❌ AVOID (from reference code):
console.log('User created:', user);
console.error('Error:', error);
// ✅ USE (this repo's pattern):
serverLogger.info({ userId: user.id, email: user.email }, 'User created');
trackError(error, {
category: ErrorCategory.DATABASE,
context: { operation: 'createUser' },
});
Search: grep -r "console\\.log\|console\\.error\|console\\.warn" reference_code/
If found: Mark as "DO NOT COPY - Use serverLogger/browserLogger"
6. Missing Input Validation
// ❌ AVOID (from reference code):
const prompt = body.prompt;
const userId = body.userId;
// ✅ USE (this repo's pattern):
const validation = validateAll([
validateString(body.prompt, 'prompt', { minLength: 1, maxLength: 1000 }),
validateUUID(body.userId, 'userId'),
]);
if (!validation.valid) {
return validationError(validation.errors[0]?.message);
}
Search: grep -A 10 "body\\..*=" reference_code/app/api/ | grep -v "validate"
If found: Mark as "DO NOT COPY - Add input validation"
7. Missing Rate Limiting
// ❌ AVOID (from reference code):
export async function POST(request: Request) {
// No rate limiting
}
// ✅ USE (this repo's pattern):
export const POST = withAuth(handler, {
route: '/api/generate',
rateLimit: RATE_LIMITS.tier2_resource_creation,
});
Search: grep -L "rateLimit\|checkRateLimit" reference_code/app/api/**/route.ts
If found: Mark as "DO NOT COPY - Add rate limiting"
8. Missing Authentication
// ❌ AVOID (from reference code):
export async function POST(request: Request) {
// No auth check
}
// ✅ USE (this repo's pattern):
export const POST = withAuth(handler, {
route: '/api/generate',
rateLimit: RATE_LIMITS.tier2_resource_creation,
});
Search: grep -L "withAuth\|requireAuth" reference_code/app/api/**/route.ts
If found: Mark as "DO NOT COPY - Add withAuth middleware"
9. Direct supabase:// URLs in Browser Elements
// ❌ AVOID (from reference code):
<img src="supabase://assets/user/image.jpg" />
<video src={asset.storage_url} /> // if storage_url is supabase://
// ✅ USE (this repo's pattern):
import { SignedImage, SignedVideo } from '@/components/generation/SignedMediaComponents';
<SignedImage src="supabase://assets/user/image.jpg" alt="Asset" ttlSeconds={3600} />
<SignedVideo src={asset.storage_url} controls ttlSeconds={3600} />
// OR use hook for custom components:
const { url, loading, error } = useSignedStorageUrl(asset.storage_url, 3600);
if (!url) return <LoadingSpinner />;
return <img src={url} alt="Asset" />;
Why it's wrong: Browsers cannot fetch supabase:// URLs. They must be converted to signed HTTPS URLs.
Search: grep -r "src=.*supabase://\|src={.*storage_url" reference_code/
If found: Mark as "DO NOT COPY - Use SignedImage/SignedVideo/SignedAudio or useSignedStorageUrl hook"
See: /docs/architecture/URL_HANDLING.md for complete guide
Anti-Pattern Detection Checklist:
- Searched for localStorage usage (persistent data)
- Searched for direct AI API calls (not queued)
- Searched for missing error handling
- Searched for hardcoded credentials
- Searched for console.log usage
- Searched for missing input validation
- Searched for missing rate limiting
- Searched for missing authentication
- Searched for direct supabase:// URLs in browser elements
Create Anti-Pattern Report:
## Reference Code Anti-Patterns to AVOID
| Anti-Pattern | Found In | This Repo's Pattern | Priority |
| ----------------------- | ------------------------- | ----------------------- | -------- |
| localStorage for assets | components/AssetGrid.tsx | Supabase Storage | P0 |
| Direct FAL API call | app/api/generate/route.ts | Supabase queues | P0 |
| console.log | lib/services/\*.ts | serverLogger | P1 |
| Hardcoded API key | lib/fal.ts | process.env.FAL_API_KEY | P0 |
### Implementation Rule:
**COPY**: Functionality, UI/UX, business logic
**ADAPT**: Implementation to use this repo's patterns
**AVOID**: All patterns listed in this anti-pattern report
Pass this report to implementation agents to prevent copying anti-patterns!
#### Step 1.4: Check for Existing Routes (PREVENT DUPLICATION)
CRITICAL: Always check for existing routes before creating new ones!
# 1. Search for existing API routes
find app/api -name "route.ts" -o -name "route.js"
# 2. Search for specific endpoint pattern
grep -r "export async function POST" app/api/
grep -r "export async function GET" app/api/
# 3. Check for similar feature names
ls -la app/api/ | grep -i "generation"
ls -la app/api/ | grep -i "ai"
ls -la app/api/ | grep -i "frame"
Route Duplication Checklist:
- Searched `app/api/` for similar route names
- Checked for existing endpoints with same functionality
- Verified no duplicate POST/GET handlers for same path
- Reviewed existing routes to see if they can be extended
- Documented any routes that will be deprecated/replaced
Common Route Patterns to Check:
// Generation routes
app/api/generate/*
app/api/generation/*
app/api/ai/generate/*
// Frame generation
app/api/frames/*
app/api/frame-generation/*
app/api/generate/frames/*
// Model-specific routes
app/api/fal/*
app/api/replicate/*
app/api/huggingface/*
// Processing routes
app/api/process/*
app/api/processing/*
app/api/jobs/*
If Route Already Exists:
Option A: Extend Existing Route
- Add new functionality to existing handler
- Update existing validation/types
- Add new business logic to service layer

Option B: Deprecate and Replace
- Create migration plan
- Add deprecation warnings to old route
- Update frontend to use new route
- Remove old route after transition period

Option C: Create Sub-route
- If existing route is generic, create specific sub-route
- Example: /api/generate/frames/parallel (new) vs /api/generate/frames (existing)
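For Option B, a hedged sketch of how the old route might announce its deprecation during the transition. The route paths reuse the examples above; `serverLogger.warn` and the `Deprecation` response header are assumptions about the available logger methods and your API conventions, not confirmed repo patterns.

```typescript
// Old route handler during the transition period (Option B sketch).
import { NextResponse } from 'next/server';
import { serverLogger } from '@/lib/serverLogger';

export async function POST(request: Request) {
  // Log every hit so Axiom shows whether anything still calls this route.
  serverLogger.warn(
    { route: '/api/generate/frames', replacement: '/api/generate/frames/parallel' },
    'Deprecated route called'
  );

  const response = await handleLegacyRequest(request); // existing logic, unchanged
  response.headers.set('Deprecation', 'true'); // advisory header for API consumers
  return response;
}

// Placeholder for the existing handler body; keep the old behavior until removal.
async function handleLegacyRequest(_request: Request): Promise<NextResponse> {
  return NextResponse.json({ ok: true });
}
```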
#### Step 1.5: Environment Variable Verification (CRITICAL)
ALWAYS verify environment variables match the current repo before refactoring!
When refactoring code from another repository or reference implementation, environment variable names may differ. Using incorrect variable names will cause runtime failures.
Environment Variable Verification Workflow:
Step 1: Read Current Repo's Environment Variables
# Check .env.example for all variables
cat .env.example
# List all required variables
grep -E "^[A-Z_]+=" .env.example | cut -d= -f1 | sort
# Check .env.local exists
ls -la .env.local
Step 2: Inventory Reference Code's Environment Variables
Search reference code for environment variable usage:
# Find all process.env usage
grep -r "process\.env\." reference_code/ | grep -v node_modules
# Extract unique variable names
grep -ro "process\.env\.[A-Z_]*" reference_code/ | \
sed 's/process\.env\.//' | \
sort -u
# Check for common patterns
grep -r "GOOGLE_\|FAL_\|REPLICATE_\|OPENAI_\|STRIPE_\|SUPABASE_" reference_code/
Step 3: Create Variable Mapping Matrix
Document ALL environment variables from reference vs current repo:
| Reference Variable | Current Repo Variable | Status | Action Required |
|---|---|---|---|
| `DALLE_API_KEY` | `OPENAI_API_KEY` | ❌ Diff | Replace in code |
| `FAL_KEY` | `FAL_API_KEY` | ❌ Diff | Replace in code |
| `GOOGLE_CREDENTIALS` | `GOOGLE_SERVICE_ACCOUNT` | ❌ Diff | Replace in code |
| `SUPABASE_URL` | `NEXT_PUBLIC_SUPABASE_URL` | ❌ Diff | Replace + add PUBLIC |
| `AXIOM_TOKEN` | `AXIOM_TOKEN` | ✅ Match | No change needed |
| `CUSTOM_VAR_FROM_REF` | (not in current repo) | ❌ New | Add to .env.example |
| `STRIPE_SECRET_KEY` | `STRIPE_SECRET_KEY` | ✅ Match | No change needed |
| `REPLICATE_KEY` | `REPLICATE_API_KEY` | ❌ Diff | Replace in code |
| `GEMINI_KEY` | `AISTUDIO_API_KEY` or `GEMINI_API_KEY` | ❌ Diff | Choose correct variable |
Step 4: Validate ALL Required Variables Exist
Check current repo's .env.example for required variables:
// Required variables from .env.example
const REQUIRED_VARS = [
'NEXT_PUBLIC_SUPABASE_URL',
'NEXT_PUBLIC_SUPABASE_ANON_KEY',
'SUPABASE_SERVICE_ROLE_KEY',
'STRIPE_SECRET_KEY',
'STRIPE_WEBHOOK_SECRET',
'STRIPE_PREMIUM_PRICE_ID',
];
// Recommended variables
const RECOMMENDED_VARS = [
'NEXT_PUBLIC_BASE_URL',
'GOOGLE_SERVICE_ACCOUNT', // or AISTUDIO_API_KEY
'AXIOM_TOKEN',
'AXIOM_DATASET',
];
// Feature-specific variables
const FEATURE_SPECIFIC_VARS = {
'image-generation': ['OPENAI_API_KEY', 'FAL_API_KEY'],
'video-generation': ['GOOGLE_SERVICE_ACCOUNT', 'FAL_API_KEY'],
'audio-generation': ['ELEVENLABS_API_KEY', 'COMET_API_KEY'],
'parallel-generation': ['OPENAI_API_KEY', 'FAL_API_KEY', 'GOOGLE_SERVICE_ACCOUNT'],
};
Step 5: Check for Missing Variables
# Check which variables are missing from .env.local
while IFS= read -r var; do
if ! grep -q "^${var}=" .env.local 2>/dev/null; then
echo "❌ MISSING: $var"
else
echo "✅ PRESENT: $var"
fi
done < <(grep -E "^[A-Z_]+=" .env.example | cut -d= -f1)
Step 6: Update Code to Use Correct Variable Names
For each variable mismatch, update the code:
// ❌ WRONG (from reference repo)
const apiKey = process.env.FAL_KEY;
const credentials = process.env.GOOGLE_CREDENTIALS;
const supabaseUrl = process.env.SUPABASE_URL;
// ✅ CORRECT (current repo)
const apiKey = process.env.FAL_API_KEY;
const credentials = process.env.GOOGLE_SERVICE_ACCOUNT;
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
Common Variable Name Patterns in This Repo:
// Current repo patterns (from .env.example):
// Supabase
NEXT_PUBLIC_SUPABASE_URL; // Public variables need NEXT_PUBLIC_ prefix
NEXT_PUBLIC_SUPABASE_ANON_KEY; // Public key (client-side)
SUPABASE_SERVICE_ROLE_KEY; // Private key (server-side)
// API Keys - standardized format
FAL_API_KEY; // NOT: FAL_KEY
OPENAI_API_KEY; // NOT: DALLE_API_KEY or OPENAI_KEY
REPLICATE_API_KEY; // NOT: REPLICATE_KEY
ELEVENLABS_API_KEY; // NOT: ELEVENLABS_KEY
// Google Cloud
GOOGLE_SERVICE_ACCOUNT; // Full JSON credentials
AISTUDIO_API_KEY; // For Gemini API (alternative)
// NOT: GOOGLE_CREDENTIALS, GEMINI_KEY, GOOGLE_API_KEY
// Axiom
AXIOM_TOKEN; // NOT: AXIOM_API_KEY
AXIOM_DATASET; // NOT: AXIOM_DATABASE
// Stripe
STRIPE_SECRET_KEY;
STRIPE_WEBHOOK_SECRET;
STRIPE_PREMIUM_PRICE_ID;
// NOT: STRIPE_KEY, STRIPE_API_KEY
Step 7: Notify User of Missing Variables
Create a comprehensive notification for the user:
// Generate missing variables report
const missingVariables = {
REQUIRED: ['NEXT_PUBLIC_SUPABASE_URL', 'FAL_API_KEY'], // Example
RECOMMENDED: ['AXIOM_TOKEN'],
FEATURE_SPECIFIC: ['GOOGLE_SERVICE_ACCOUNT'],
};
// Notify user
console.log(`
⚠️ ENVIRONMENT VARIABLE CHECK FAILED
The following environment variables are missing or incorrect:
REQUIRED VARIABLES (Application will not work without these):
${missingVariables.REQUIRED.map((v) => ` ❌ ${v}`).join('\n')}
RECOMMENDED VARIABLES (Strongly recommended for production):
${missingVariables.RECOMMENDED.map((v) => ` ⚠️ ${v}`).join('\n')}
FEATURE-SPECIFIC VARIABLES (Required for this feature):
${missingVariables.FEATURE_SPECIFIC.map((v) => ` ❌ ${v}`).join('\n')}
ACTION REQUIRED:
1. Copy .env.example to .env.local:
cp .env.example .env.local
2. Add the missing variables to .env.local:
${missingVariables.REQUIRED.concat(missingVariables.FEATURE_SPECIFIC)
.map((v) => ` ${v}=your_value_here`)
.join('\n ')}
3. See .env.example for detailed instructions on obtaining these values
4. Validate your environment:
npm run validate:env
For detailed setup instructions, see:
- .env.example (comprehensive variable list)
- ENVIRONMENT_VARIABLES.md (detailed documentation)
- docs/setup/ (service-specific setup guides)
`);
Step 8: Add New Variables to .env.example (if needed)
If reference code requires variables not in current repo:
# Add to .env.example with documentation
cat >> .env.example << 'EOF'
# -----------------------------------------------------------------------------
# [New Feature Name]
# Get from: [URL to obtain key]
# -----------------------------------------------------------------------------
# Description of what this variable does
# NEW_VARIABLE_NAME=
EOF
Environment Variable Verification Checklist:
Before proceeding with refactoring, verify:
- Read current repo's .env.example
- Inventoried all environment variables in reference code
- Created variable mapping matrix (reference → current)
- Identified all variable name differences
- Checked for missing required variables
- Checked for missing feature-specific variables
- Updated code to use correct variable names
- User notified of ALL missing variables
- Added instructions for obtaining missing variables
- Added new variables to .env.example (if needed)
- Validated environment with `npm run validate:env`
- Documented any new variables in ENVIRONMENT_VARIABLES.md
Common Environment Variable Issues:
| Issue | Symptom | Fix |
|---|---|---|
| Using `FAL_KEY` instead of `FAL_API_KEY` | FAL API calls fail with auth error | Replace all `process.env.FAL_KEY` with `FAL_API_KEY` |
| Using `GOOGLE_CREDENTIALS` instead of `GOOGLE_SERVICE_ACCOUNT` | Google Cloud calls fail | Replace variable name and ensure JSON format |
| Missing `NEXT_PUBLIC_` prefix | Variable undefined on client-side | Add `NEXT_PUBLIC_` prefix for client-side vars |
| Using `GEMINI_KEY` from reference | Gemini calls fail | Use `AISTUDIO_API_KEY` or `GEMINI_API_KEY` instead |
| Variable exists but wrong format | API fails with invalid credentials | Check .env.example for correct format |
Automated Variable Validation Script:
#!/bin/bash
# validate-env-vars.sh - Run before refactoring
echo "🔍 Validating environment variables..."
# Check .env.local exists
if [ ! -f .env.local ]; then
echo "❌ .env.local not found!"
echo " Run: cp .env.example .env.local"
exit 1
fi
# Check each required variable
REQUIRED_VARS=(
"NEXT_PUBLIC_SUPABASE_URL"
"NEXT_PUBLIC_SUPABASE_ANON_KEY"
"SUPABASE_SERVICE_ROLE_KEY"
"STRIPE_SECRET_KEY"
)
MISSING=()
for var in "${REQUIRED_VARS[@]}"; do
if ! grep -q "^${var}=.\\+" .env.local; then
MISSING+=("$var")
echo "❌ MISSING: $var"
else
echo "✅ FOUND: $var"
fi
done
if [ ${#MISSING[@]} -gt 0 ]; then
echo ""
echo "❌ ${#MISSING[@]} required variables are missing!"
echo " Please add them to .env.local"
exit 1
fi
echo ""
echo "✅ All required variables present!"
If Variables Are Missing:
- STOP refactoring immediately
- Notify user with clear instructions
- Provide links to obtain each missing variable
- Wait for user to add variables to .env.local
- Run validation again
- Only proceed when all required variables exist
Example User Notification:
⚠️ ENVIRONMENT SETUP REQUIRED
Before we can proceed with refactoring, you need to add the following environment variables:
## Required Variables
1. **FAL_API_KEY** - AI image/video generation
- Get from: https://fal.ai/dashboard/keys
- Add to .env.local: `FAL_API_KEY=your_key_here`
2. **GOOGLE_SERVICE_ACCOUNT** - Google Cloud AI services
- Get from: https://console.cloud.google.com/iam-admin/serviceaccounts
- Add to .env.local: `GOOGLE_SERVICE_ACCOUNT={"type":"service_account",...}`
3. **AXIOM_TOKEN** - Production logging
- Get from: https://app.axiom.co/settings/tokens
- Add to .env.local: `AXIOM_TOKEN=xaat-...`
## Setup Steps
1. Copy the example file:
```bash
cp .env.example .env.local
```
2. Open .env.local and add your values

3. Validate your setup: `npm run validate:env`

Let me know when you're done, and I'll continue with the refactoring.
For detailed instructions, see:
- .env.example (line-by-line documentation)
- ENVIRONMENT_VARIABLES.md (comprehensive guide)
---
#### Step 1.6: Verify Supabase Queues for AI Generation
**CRITICAL: All AI generation features MUST use Supabase queues, not direct API calls!**
**Architecture Pattern:**
Frontend → API Route → Supabase Queue → Worker Picks Up Job → AI Service → Update Job Status
**Queue Verification Checklist:**
```bash
# 1. Check if processing_jobs table exists
supabase db remote sql "
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'processing_jobs'
"
# 2. Check job_type enum includes your feature
supabase db remote sql "
SELECT enumlabel
FROM pg_enum
JOIN pg_type ON pg_enum.enumtypid = pg_type.oid
WHERE pg_type.typname = 'job_type'
ORDER BY enumlabel
"
# 3. Check if queue table exists (if using pg_mq)
supabase db remote sql "
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name LIKE '%queue%'
"
# 4. List all job types currently in use
supabase db remote sql "
SELECT DISTINCT job_type, COUNT(*) as count
FROM processing_jobs
GROUP BY job_type
ORDER BY count DESC
"
```

Required Queue Components:
1. Database Migration - Add job type to enum

   ```sql
   -- Example: supabase/migrations/YYYYMMDD_add_job_type.sql
   ALTER TYPE job_type ADD VALUE IF NOT EXISTS 'your_feature_name';
   ```

2. Job Creation in API Route

   ```typescript
   // ✅ CORRECT: Create job in queue
   const { data: job, error } = await supabase
     .from('processing_jobs')
     .insert({
       user_id: userId,
       job_type: 'your_feature_name',
       status: 'pending',
       input_data: { /* params */ },
     })
     .select()
     .single();

   return NextResponse.json({ jobId: job.id, status: 'queued' });

   // ❌ WRONG: Direct AI API call in route handler
   const result = await fetch('https://api.fal.ai/...'); // NEVER DO THIS!
   ```

3. Worker Implementation

   ```typescript
   // lib/workers/aiGenerationWorker.ts
   export async function processAIGenerationJob(job: ProcessingJob) {
     try {
       // Update status to processing
       await updateJobStatus(job.id, 'processing');

       // Call AI service
       const result = await aiService.generate(job.input_data);

       // Update with results
       await updateJobStatus(job.id, 'completed', {
         output_data: result,
       });
     } catch (error) {
       await updateJobStatus(job.id, 'failed', {
         error_message: error.message,
       });
     }
   }
   ```

4. Frontend Polling/WebSocket

   ```typescript
   // Frontend checks job status
   const { data: job } = await supabase
     .from('processing_jobs')
     .select('*')
     .eq('id', jobId)
     .single();

   if (job.status === 'completed') {
     // Use results from job.output_data
   }
   ```
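A hedged sketch of the frontend polling side as a reusable hook. The hook name, the `createClient` import path, and the 2-second interval are assumptions (the interval matches the workflow documented in Step 1.1.5); adapt to the repo's actual Supabase client helpers.

```typescript
// Hypothetical hook: polls processing_jobs until the job finishes or fails.
import { useEffect, useState } from 'react';
import { createClient } from '@/lib/supabase/client'; // assumed browser client helper

type JobStatus = 'pending' | 'processing' | 'completed' | 'failed';

export function useJobStatus(jobId: string | null, intervalMs = 2000) {
  const [status, setStatus] = useState<JobStatus>('pending');
  const [outputData, setOutputData] = useState<unknown>(null);

  useEffect(() => {
    if (!jobId) return;
    const supabase = createClient();

    const timer = setInterval(async () => {
      const { data: job } = await supabase
        .from('processing_jobs')
        .select('status, output_data')
        .eq('id', jobId)
        .single();
      if (!job) return;

      setStatus(job.status as JobStatus);
      if (job.status === 'completed') setOutputData(job.output_data);
      if (job.status === 'completed' || job.status === 'failed') clearInterval(timer);
    }, intervalMs);

    return () => clearInterval(timer);
  }, [jobId, intervalMs]);

  return { status, outputData };
}
```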
AI Generation Tools Requiring Queues:
- Frame generation (FAL, Replicate, etc.)
- Video generation
- Image-to-image transformation
- Text-to-image generation
- Audio generation
- Upscaling/enhancement
- Background removal
- Style transfer
- Any long-running AI operation (>5 seconds)
Why Queues Are Required:
- Vercel Timeout - API routes have 10s timeout (Hobby) / 60s (Pro)
- User Experience - Don't block UI waiting for AI response
- Reliability - Jobs can be retried if they fail
- Scalability - Workers can process jobs in parallel
- Monitoring - Track job status, errors, and performance
- Cost Control - Rate limit AI API calls at queue level
Queue Migration Example:
If you find direct AI API calls in routes, migrate them:
// BEFORE (❌ WRONG):
export async function POST(request: Request) {
const body = await request.json();
const result = await fal.run('fal-ai/flux/dev', { prompt: body.prompt });
return NextResponse.json({ image: result.images[0].url });
}
// AFTER (✅ CORRECT):
export async function POST(request: Request) {
const userId = await getUserId(request);
const body = await request.json();
// Create job in queue
const { data: job } = await supabase
.from('processing_jobs')
.insert({
user_id: userId,
job_type: 'image_generation',
status: 'pending',
input_data: { prompt: body.prompt },
})
.select()
.single();
// Return job ID immediately
return NextResponse.json({
jobId: job.id,
status: 'queued',
message: 'Job queued for processing',
});
}
### Phase 2: Implementation
#### Step 2.1: Implement Core Features
Use sequential implementation with validation after each step:
- Implement data models (TypeScript interfaces, database migrations)
- Implement storage patterns (user-scoped paths, cleanup on failure - see `/docs/STORAGE_GUIDE.md`)
- Implement API routes (with proper middleware: auth, rate limiting, validation)
- Implement frontend components (following existing patterns)
- Implement hooks/services (business logic, state management)
- Implement workers (background job processing)
External API Implementation Gate (MANDATORY):
- Before writing or modifying any external API call, open the External API Parity Matrix
- Verify the base URL, endpoint path, HTTP method, auth header, and parameter names match the matrix exactly
- If the matrix and current official docs disagree, pause implementation and update the matrix with the November 2025 spec (including `doc_source` + `doc_published_at`) before proceeding
- Document any intentional divergence (e.g., deprecated endpoint replaced) directly in the matrix so Phase 4 can re-validate the change
Storage Implementation Checklist:
- All uploads use Supabase Storage with user-scoped paths: `{user_id}/{project_id}/...`
- Storage cleanup on database insert failure
- Temporary files use `os.tmpdir()` with `finally` block cleanup
- NO localStorage for persistent data (use Supabase Database)
- GCS only for AI processing with user-scoped paths and cleanup
- localStorage validation: try/catch + NaN checks + cleanup
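A hedged sketch of the user-scoped upload pattern with rollback on a failed database insert. The bucket name, table name, and column names are assumptions; check `/docs/STORAGE_GUIDE.md` for the repo's actual helpers before copying.

```typescript
// Sketch: upload to a user-scoped path, then remove the upload if the DB insert fails.
import type { SupabaseClient } from '@supabase/supabase-js';

async function saveGeneratedImage(
  supabase: SupabaseClient,
  userId: string,
  projectId: string,
  file: Blob
) {
  const storagePath = `${userId}/${projectId}/images/${Date.now()}.png`; // user-scoped path

  const { error: uploadError } = await supabase.storage.from('assets').upload(storagePath, file);
  if (uploadError) throw uploadError;

  const { data: asset, error: insertError } = await supabase
    .from('assets')
    .insert({
      user_id: userId,
      project_id: projectId,
      storage_url: `supabase://assets/${storagePath}`, // converted via SignedImage/useSignedStorageUrl in the UI
    })
    .select()
    .single();

  if (insertError) {
    // Storage cleanup on DB insert failure, as required by the checklist above.
    await supabase.storage.from('assets').remove([storagePath]);
    throw insertError;
  }

  return asset;
}
```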
After EACH major component:
npm run build
npm run lint
#### Step 2.2: Rate Limit Verification (CRITICAL)
ALWAYS verify rate limit consistency across ALL layers:
// 1. Check database function signature (in latest migration)
CREATE OR REPLACE FUNCTION increment_rate_limit(
p_rate_key text, -- ✅ Parameter name with p_ prefix
p_window_seconds integer DEFAULT 60
)
RETURNS integer -- ✅ Return type: single integer, NOT array
// 2. Check types/supabase.ts matches EXACTLY
increment_rate_limit: {
Args: {
p_rate_key: string; // ✅ Must match migration parameter name
p_window_seconds?: number;
};
Returns: number; // ✅ Must match migration return type (NOT [])
};
// 3. Check lib/rateLimit.ts RPC calls use correct names
const { data, error } = await supabase.rpc('increment_rate_limit', {
p_rate_key: key, // ✅ Must match parameter name
p_window_seconds: windowSeconds,
});
// 4. Check API routes apply correct tier
export async function POST(request: Request) {
await checkRateLimit(request, 'api:generation:create'); // ✅ Correct tier
// ...
}
Rate Limit Mismatch Checklist:
- Migration parameter names match types/supabase.ts
- Migration return type matches types/supabase.ts
- RPC calls in lib/rateLimit.ts use correct parameter names
- API routes use appropriate tier (check lib/rateLimit.ts for tiers)
- No fallback to in-memory rate limiting in production logs
- Test actual rate limiting works (make rapid requests)
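A rough sketch of the "make rapid requests" check from the last item. The endpoint, payload, and the expectation of a 429 response are assumptions about how the tier behaves, and authenticated routes will return 401 unless you supply a valid session.

```typescript
// Quick manual smoke test: fire requests until the rate limiter pushes back.
async function hammerEndpoint(url: string, attempts = 20): Promise<void> {
  let limited = 0;
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: 'rate limit test' }),
    });
    if (res.status === 429) limited++;
  }
  console.log(
    limited > 0
      ? `✅ Rate limiting kicked in (${limited} responses were 429)`
      : '❌ No 429s observed - check tier wiring and the increment_rate_limit RPC'
  );
}

// Example (hypothetical route): await hammerEndpoint('http://localhost:3000/api/generate');
```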
Common Rate Limit Mismatches:
1. Parameter Name Mismatch

   ```typescript
   // ❌ WRONG: Migration has p_rate_key, types has rate_key
   Args: { rate_key: string }    // TypeScript
   p_rate_key text               -- Migration

   // ✅ CORRECT: Names match
   Args: { p_rate_key: string }  // TypeScript
   p_rate_key text               -- Migration
   ```

2. Return Type Mismatch

   ```typescript
   // ❌ WRONG: Function returns integer, types expects array
   Returns: number[];   // TypeScript expects array
   RETURNS integer      -- Migration returns single value

   // ✅ CORRECT: Return types match
   Returns: number;     // TypeScript
   RETURNS integer      -- Migration
   ```

3. Missing Rate Limit Tier

   ```typescript
   // ❌ WRONG: Using generic tier
   await checkRateLimit(request, 'api:default');

   // ✅ CORRECT: Using specific tier for operation
   await checkRateLimit(request, 'api:generation:create');
   ```
#### Step 2.3: Type System Checks
CRITICAL: Always verify type consistency:
// Check these locations for type mismatches:
1. /types/supabase.ts - Database function signatures
2. /lib/services/*.ts - Service method signatures
3. /app/api/*/route.ts - API request/response types
4. Supabase migrations - Enum values and function parameters
Common Type Mismatch Patterns:
1. Database Function Signatures

   ```typescript
   // ❌ WRONG: Old parameter names
   increment_rate_limit: {
     Args: { rate_key: string; window_seconds?: number };
   }

   // ✅ CORRECT: Match migration parameter names
   increment_rate_limit: {
     Args: { p_rate_key: string; p_window_seconds?: number };
   }
   ```

2. Branded Type Comparisons

   ```typescript
   // ❌ WRONG: Comparing branded types directly
   const validIds = new Set(configs.map(c => c.id)); // Set<BrandedType>
   if (validIds.has(plainId)) // Always fails!

   // ✅ CORRECT: Convert to plain strings
   const validIds = new Set(configs.map(c => String(c.id)));
   if (validIds.has(plainId)) // Works!
   ```

3. Function Return Types

   ```typescript
   // ❌ WRONG: Expecting array when function returns single value
   Returns: { count: number; reset_time: string }[];

   // ✅ CORRECT: Match actual function return
   Returns: number;
   ```
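For context, a minimal sketch of the branded-type pattern these pitfalls refer to; the `Brand` helper and `UserId` name are illustrative, so use the repo's actual branded type definitions.

```typescript
// Minimal branded type sketch: a string the type system refuses to mix with other strings.
type Brand<T, B extends string> = T & { readonly __brand: B };
type UserId = Brand<string, 'UserId'>;

const userId = 'user-123' as UserId;

// Sets built from branded values still hold plain strings at runtime, but the
// comparison pitfall above disappears once you convert explicitly with String().
const validIds = new Set<string>([String(userId)]);
console.log(validIds.has('user-123')); // true
```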
#### Step 2.4: Queue Worker Setup & Validation (CRITICAL FOR ASYNC OPERATIONS)
If the feature uses async job processing (image/video generation, exports, etc.), ALWAYS verify the queue worker:
Queue Worker Requirements Checklist:
- Worker exists - Check `/lib/workers/generationQueueWorker.ts` exists
- Worker is running - Verify process with `ps aux | grep worker`
- Job types supported - Check worker switch statement handles all job types
- Environment variables - Worker needs `.env.local` loaded
- Database schema - Job types exist in database enum
Step-by-Step Queue Worker Validation:
1. Check if Worker Exists

   ```bash
   ls -la lib/workers/generationQueueWorker.ts
   ls -la scripts/start-worker.ts
   ```

2. Verify Job Type Support

   ```typescript
   // In /lib/workers/generationQueueWorker.ts, verify switch statement:
   switch (job.job_type) {
     case JobType.VIDEO_GENERATION:
       await this.processVideoGeneration(job);
       break;
     case JobType.IMAGE_GENERATION:
     case JobType.PARALLEL_IMAGE_GENERATION: // ✅ Add your new job type here!
       await this.processImageGeneration(job);
       break;
     // Add cases for ALL job types your feature creates
   }
   ```

3. Verify Database Enum Includes Job Type

   ```sql
   -- Check latest migration includes your job type:
   ALTER TYPE job_type ADD VALUE 'parallel-image-generation';
   ALTER TYPE job_type ADD VALUE 'prompt-optimization';
   ```

4. Verify TypeScript Enum Matches

   ```typescript
   // In /types/jobs.ts:
   export const JobType = {
     VIDEO_GENERATION: 'video-generation',
     IMAGE_GENERATION: 'image-generation',
     PARALLEL_IMAGE_GENERATION: 'parallel-image-generation', // ✅ Must match migration
     // ...
   } as const;
   ```

5. Start the Worker

   ```bash
   # Worker needs environment variables loaded:
   NODE_ENV=development npx dotenv-cli -e .env.local npx tsx scripts/start-worker.ts

   # Verify worker is running:
   ps aux | grep "start-worker"
   ```

6. Monitor Worker Logs

   ```bash
   # Watch for these events in the worker output:
   # ✅ "worker.started" - Worker initialized successfully
   # ✅ "worker.jobs_fetched" - Found pending jobs
   # ✅ "worker.job_processing_started" - Processing job
   # ✅ "worker.job_completed" - Job finished successfully

   # Watch for these ERROR patterns:
   # ❌ "Unsupported job type" - Job type not in switch statement
   # ❌ "Supabase configuration missing" - Environment variables not loaded
   # ❌ "Failed to fetch pending jobs" - Database connection issue
   ```

7. Test End-to-End Job Processing

   ```bash
   # Create a test job via API:
   curl -X POST http://localhost:3000/api/your-feature/generate \
     -H "Content-Type: application/json" \
     -d '{"prompt": "test", ...}'

   # Check job status in database:
   curl "https://YOUR_PROJECT.supabase.co/rest/v1/processing_jobs?select=*&order=created_at.desc&limit=1" \
     -H "apikey: YOUR_KEY"

   # Watch for status changes:
   # pending → processing → completed ✅
   # OR
   # pending → failed ❌ (check error_message field)
   ```
Common Queue Worker Issues:
| Issue | Symptom | Fix |
|---|---|---|
| Worker not running | Jobs stuck in "pending" forever | Start worker with npx tsx scripts/start-worker.ts |
| Missing environment variables | Worker crashes on startup | Use dotenv-cli -e .env.local to load env vars |
| Job type not in switch statement | Jobs fail with "Unsupported" | Add case JobType.YOUR_TYPE to worker switch |
| Database enum missing job type | Job creation fails with 42703 | Create migration to ALTER TYPE job_type ADD VALUE |
| TypeScript enum doesn't match migration | Type errors in API routes | Sync /types/jobs.ts with migration enum values |
| Worker processing wrong job types | Unrelated jobs get processed | Check worker jobTypes config filter |
| Jobs not being picked up | Worker polls but finds nothing | Verify job status is exactly "pending" (not "failed" or "archived") |
| Worker exits immediately after processing | No continuous polling | Check poll loop is running (not just processing once) |
CRITICAL VALIDATION: Before marking implementation complete, ALWAYS:
- Start the queue worker in background
- Create a test job via your feature
- Watch worker logs pick up and process the job
- Verify job status changes from "pending" → "processing" → "completed"
- Confirm job results are stored correctly (asset created, file uploaded, etc.)
If implementing a NEW async feature that needs queue processing:
- Create job type in database migration
- Add job type to TypeScript `/types/jobs.ts`
- Add handler case in worker switch statement
- Test worker picks up and processes jobs
- Document in feature README how to start worker
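To make the testing step in this checklist concrete, here is a minimal sketch of how a feature can enqueue a job through the `processing_jobs` table for the worker to pick up. The column names (`user_id`, `job_type`, `status`, `input_data`) and the `JobType` import are assumptions based on the patterns above; adjust them to the actual table schema and types in this repo.

```typescript
// Minimal sketch: enqueue a pending job for the queue worker to process.
// Column names and JobType values are assumptions; match them to the migration.
import { createClient } from '@supabase/supabase-js';
import type { Database } from '@/types/supabase';
import { JobType } from '@/types/jobs';

export async function enqueueParallelImageJob(userId: string, prompt: string) {
  const supabase = createClient<Database>(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );

  // Insert a pending job; the worker polls for status = 'pending'.
  const { data: job, error } = await supabase
    .from('processing_jobs')
    .insert({
      user_id: userId,
      job_type: JobType.PARALLEL_IMAGE_GENERATION,
      status: 'pending',
      input_data: { prompt },
    })
    .select()
    .single();

  if (error) {
    throw new Error(`Failed to enqueue job: ${error.message}`);
  }
  return job;
}
```

Once a row like this lands in `processing_jobs`, the worker started above should pick it up within one poll cycle and move it through pending → processing → completed.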
Step 2.5: Error Tracking & Sentry Validation
CRITICAL: All error paths must include proper trackError() calls with context for Sentry monitoring.
Purpose: Ensure all try-catch blocks, error boundaries, and failure paths include comprehensive error tracking for production debugging.
Validation Commands:
# 1. Find all try-catch blocks in feature code
grep -r "try\s*{" [feature-files] -A 10 | grep -B 5 "catch"
# 2. Verify trackError is called in catch blocks
grep -r "trackError" [feature-files]
# 3. Check for silent error swallowing (empty catch blocks)
grep -r "catch.*{.*}" [feature-files]
# 4. Verify error imports
grep -r "import.*trackError" [feature-files]
grep -r "import.*ErrorCategory" [feature-files]
grep -r "import.*ErrorSeverity" [feature-files]
# 5. Check for custom error classes
grep -r "extends Error" [feature-files]
Error Tracking Checklist:
- All try-catch blocks include `trackError()` calls
- Error tracking includes proper context:
  - `userId` - Who experienced the error
  - `projectId`/`assetId` - What resource was affected
  - Component/function name - Where it occurred
  - Action/operation - What was being attempted
  - Input parameters - What data caused the failure
- `ErrorCategory` specified (DATABASE, EXTERNAL_SERVICE, VALIDATION, etc.)
- `ErrorSeverity` specified (LOW, MEDIUM, HIGH, CRITICAL)
- Custom error classes extend base `Error` class
- User-friendly error messages (no stack traces or technical details to users)
- No silent error swallowing (empty catch blocks without logging)
- API errors return appropriate HTTP status codes (400, 404, 500, etc.)
- Stack traces captured in development mode
Example: Correct Error Tracking
// ✅ CORRECT - Full error tracking with context
import { trackError, ErrorCategory, ErrorSeverity } from '@/lib/errorTracking';
import { createServerLogger } from '@/lib/serverLogger';
export async function POST(request: NextRequest) {
const log = createServerLogger();
const userId = 'user-123' as UserId;
const params = await request.json();
const startTime = Date.now();
try {
const result = await generateVideo(params);
await log.flush();
return Response.json({ success: true, result });
} catch (error) {
// Track error with full context
trackError(error as Error, {
category: ErrorCategory.EXTERNAL_SERVICE,
severity: ErrorSeverity.HIGH,
context: {
userId,
provider: 'wavespeed',
model: params.modelId,
operation: 'video_generation',
prompt: params.prompt,
processingTime: Date.now() - startTime,
},
});
log.error('Video generation failed', {
error,
userId,
provider: 'wavespeed',
});
await log.flush();
// Return user-friendly error
return Response.json({ error: 'Failed to generate video. Please try again.' }, { status: 500 });
}
}
Example: Common Mistakes
// ❌ WRONG - No error tracking
try {
await doSomething();
} catch (error) {
console.error(error); // Only logs to console, not tracked in Sentry!
return { error: 'Failed' };
}
// ❌ WRONG - Missing context
try {
await doSomething();
} catch (error) {
trackError(error as Error); // No category, severity, or context!
return { error: 'Failed' };
}
// ❌ WRONG - Silent error swallowing
try {
await doSomething();
} catch (error) {
// Empty catch block - error disappears!
}
// ❌ WRONG - Exposing technical details to user
try {
await doSomething();
} catch (error) {
return { error: error.message }; // May expose stack traces or internal details
}
Error Categories Reference:
export enum ErrorCategory {
VALIDATION = 'validation', // Invalid input data
DATABASE = 'database', // Supabase/database errors
EXTERNAL_SERVICE = 'external_service', // Third-party API failures
AUTHENTICATION = 'authentication', // Auth/permission errors
NETWORK = 'network', // Network/connectivity issues
UNKNOWN = 'unknown', // Uncategorized errors
}
export enum ErrorSeverity {
LOW = 'low', // Non-critical, degraded functionality
MEDIUM = 'medium', // Important feature broken
HIGH = 'high', // Critical feature broken
CRITICAL = 'critical', // Complete system failure
}
Validation Steps:
Read All Feature Files
```bash
# List all files in your feature
find [feature-directory] -name "*.ts" -o -name "*.tsx"
```

Check Each Try-Catch Block
For each catch block found:
- ✅ Contains `trackError()` call
- ✅ Includes `ErrorCategory`
- ✅ Includes `ErrorSeverity`
- ✅ Includes context object with relevant data
- ✅ Returns user-friendly error message
Verify Custom Error Classes
```typescript
// Custom errors should extend Error
export class ValidationError extends Error {
  constructor(
    message: string,
    public field?: string
  ) {
    super(message);
    this.name = 'ValidationError';
  }
}
```

Test Error Tracking in Production
```typescript
// Check Axiom for tracked errors
mcp__axiom__queryApl({
  query: `
    ['nonlinear-editor']
    | where ['_time'] > ago(1h)
    | where ['severity'] == "error"
    | where ['message'] contains "your-feature-name"
    | project ['_time'], ['message'], ['context.category'], ['context.severity'], ['userId']
    | order by ['_time'] desc
  `,
});
```

```bash
# Check Sentry for issues (if integrated)
npx sentry-cli issues list --status unresolved --limit 20
```
CRITICAL VALIDATION: Before marking implementation complete:
- Trigger an error in your feature (invalid input, network failure, etc.)
- Verify error appears in Axiom logs with full context
- Verify error includes userId, category, severity, and relevant metadata
- Confirm user sees friendly error message (not stack trace)
- Check Sentry dashboard for the tracked error (if integrated)
Common Error Tracking Mistakes:
| Mistake | Impact | Fix |
|---|---|---|
| No `trackError()` in catch blocks | Errors invisible in Sentry | Add `trackError()` to all catch blocks |
| Missing context data | Can't debug production errors | Include userId, resourceId, operation details |
| No error category/severity | Can't prioritize or filter errors | Always specify category and severity |
| Empty catch blocks | Silent failures | Always log and track errors |
| Exposing stack traces to users | Security risk, poor UX | Return generic user-friendly messages |
| Not flushing logs in API routes | Logs lost | Always await log.flush() before returning |
| Console.error only | No production tracking | Use trackError() for Sentry integration |
Step 2.6: Storage Best Practices - Migrate from localStorage to Supabase
CRITICAL: Minimize or eliminate localStorage usage. Use Supabase for persistent user data, especially images and files.
Purpose: Ensure refactored code follows best practices by storing data server-side in Supabase instead of client-side localStorage.
Why Avoid localStorage:
- ❌ Limited capacity: 5-10MB browser limit (images fill this quickly)
- ❌ Not synced: Data lost when user switches devices/browsers
- ❌ Security risks: Accessible via XSS attacks
- ❌ Performance: Large data slows down app initialization
- ❌ No backups: Data lost if user clears browser cache
- ❌ No server access: Can't process or query data server-side
✅ Use Supabase Instead:
- ✅ Unlimited storage: Store images, videos, files without size limits
- ✅ Cross-device sync: Data accessible from any device
- ✅ Secure: Row-level security (RLS) policies protect user data
- ✅ Backed up: Automatic backups and point-in-time recovery
- ✅ Queryable: Server-side queries, filtering, joins
- ✅ Shareable: Easy to share data between users
Validation Commands:
# 1. Find all localStorage usage in feature code
grep -r "localStorage" [feature-files]
# 2. Find sessionStorage usage (also should be minimized)
grep -r "sessionStorage" [feature-files]
# 3. Check for image/file storage in localStorage
grep -r "localStorage.*setItem.*data:image\|localStorage.*setItem.*blob" [feature-files]
# 4. Verify Supabase storage usage
grep -r "supabase.*storage\|uploadFile\|createAsset" [feature-files]
# 5. Check for proper asset records in database
grep -r "insert.*assets\|createAsset" [feature-files]
Migration Checklist:
- No localStorage for user data - User preferences, settings, content stored in Supabase
- No localStorage for images - All images uploaded to Supabase Storage
- No localStorage for files - All files (videos, audio, documents) in Supabase Storage
- localStorage only for ephemeral UI state (if absolutely necessary):
- Temporary form data (auto-save drafts)
- UI preferences (theme, sidebar collapsed)
- Non-sensitive session state
- Maximum 1KB per key
- Asset records created - Every file upload creates database record in `assets` table
- RLS policies applied - Assets protected by Row Level Security
- Proper file paths - Assets stored in organized buckets with user-specific paths
Example: WRONG - localStorage for Images
// ❌ WRONG - Storing images in localStorage
async function saveImage(imageBlob: Blob) {
const reader = new FileReader();
reader.onload = () => {
const base64 = reader.result as string;
localStorage.setItem('userImage', base64); // BAD: Limited space, no sync, not secure
};
reader.readAsDataURL(imageBlob);
}
// ❌ WRONG - Retrieving from localStorage
function getImage(): string | null {
return localStorage.getItem('userImage'); // BAD: Data lost on cache clear
}
Example: CORRECT - Supabase Storage
// ✅ CORRECT - Upload to Supabase Storage
import { createClientComponentClient } from '@supabase/auth-helpers-nextjs';
import type { Database } from '@/types/supabase';
import type { AssetId, UserId } from '@/types/branded';
async function saveImage(
imageFile: File,
userId: UserId
): Promise<{ assetId: AssetId; url: string }> {
const supabase = createClientComponentClient<Database>();
// 1. Upload file to Supabase Storage
const filePath = `${userId}/images/${Date.now()}-${imageFile.name}`;
const { data: uploadData, error: uploadError } = await supabase.storage
.from('assets')
.upload(filePath, imageFile, {
cacheControl: '3600',
upsert: false,
});
if (uploadError) {
throw new Error(`Failed to upload image: ${uploadError.message}`);
}
// 2. Get public URL
const {
data: { publicUrl },
} = supabase.storage.from('assets').getPublicUrl(filePath);
// 3. Create asset record in database
const { data: asset, error: dbError } = await supabase
.from('assets')
.insert({
user_id: userId,
type: 'image',
url: publicUrl,
storage_path: filePath,
file_name: imageFile.name,
file_size: imageFile.size,
mime_type: imageFile.type,
})
.select()
.single();
if (dbError) {
throw new Error(`Failed to create asset record: ${dbError.message}`);
}
return {
assetId: asset.id as AssetId,
url: publicUrl,
};
}
// ✅ CORRECT - Retrieve from database
async function getUserImages(userId: UserId): Promise<Asset[]> {
const supabase = createClientComponentClient<Database>();
const { data: assets, error } = await supabase
.from('assets')
.select('*')
.eq('user_id', userId)
.eq('type', 'image')
.order('created_at', { ascending: false });
if (error) {
throw new Error(`Failed to fetch images: ${error.message}`);
}
return assets;
}
Acceptable localStorage Use Cases:
// ✅ ACCEPTABLE - Ephemeral UI state only
interface UIPreferences {
theme: 'light' | 'dark';
sidebarCollapsed: boolean;
lastViewedTab: string;
}
// Small, non-critical UI preferences
function saveUIPreference(key: keyof UIPreferences, value: string): void {
localStorage.setItem(`ui_${key}`, value);
}
// ✅ ACCEPTABLE - Temporary form draft (with Supabase backup)
async function saveDraft(formData: FormData): Promise<void> {
// 1. Save to localStorage for instant restore
localStorage.setItem('form_draft', JSON.stringify(formData));
// 2. Also save to Supabase for cross-device access
await supabase.from('drafts').upsert({
user_id: userId,
form_data: formData,
updated_at: new Date().toISOString(),
});
}
Migration Steps:
Identify localStorage Usage
```bash
# Find all localStorage calls
grep -rn "localStorage\." [feature-files]
```

Categorize Each Usage
For each localStorage usage, decide:
- Migrate to Supabase: User data, images, files, anything persistent
- Migrate to Supabase + localStorage cache: Frequently accessed data (store in Supabase, cache in localStorage)
- Keep as localStorage: Truly ephemeral UI state only
Create Supabase Migration (if new tables needed)
```sql
-- Example: Create user_preferences table
CREATE TABLE user_preferences (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
  preferences JSONB NOT NULL DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(user_id)
);

-- Enable RLS
ALTER TABLE user_preferences ENABLE ROW LEVEL SECURITY;

-- RLS Policy: Users can only access their own preferences
CREATE POLICY "Users can manage their own preferences"
  ON user_preferences
  FOR ALL
  USING (auth.uid() = user_id);
```

Update Code to Use Supabase
Replace localStorage calls with Supabase queries (see examples above)
Verify Migration
```bash
# Check that localStorage usage is minimal
grep -c "localStorage" [feature-files]
# Should be 0 or only for ephemeral UI state

# Verify Supabase storage is used
grep -c "supabase.storage\|createAsset" [feature-files]
# Should be > 0 for any file handling
```
Common Migration Patterns:
| Old Pattern (localStorage) | New Pattern (Supabase) |
|---|---|
| `localStorage.setItem('userImage', data)` | `supabase.storage.from('assets').upload()` + create asset record |
| `localStorage.getItem('userData')` | `supabase.from('users').select('*').eq('id', userId)` |
| `localStorage.setItem('settings', json)` | `supabase.from('user_preferences').upsert({ preferences: json })` |
| `localStorage.setItem('draft', data)` | `supabase.from('drafts').upsert()` (with optional localStorage cache) |
| `localStorage.setItem('cache', data)` | `supabase.from('cache').upsert()` or use React Query / SWR |
CRITICAL VALIDATION: Before marking refactoring complete:
- Search for all `localStorage` usage in refactored code
- Verify each usage is either:
- Eliminated - Migrated to Supabase
- Justified - Truly ephemeral UI state with written justification
- Verify images/files are uploaded to Supabase Storage
- Verify asset records are created in database
- Test that data persists across browser sessions and devices
- Verify RLS policies protect user data
Common localStorage Anti-Patterns to Avoid:
| Anti-Pattern | Why It's Bad | Solution |
|---|---|---|
| Storing images as base64 | Huge size, slow performance | Upload to Supabase Storage |
| Storing user profiles | No cross-device sync | Store in profiles table |
| Storing generated content | Lost on cache clear | Store in assets table |
| Storing auth tokens | Security risk if XSS | Use httpOnly cookies (Supabase handles) |
| Storing application state | Not accessible server-side | Use database tables with proper schema |
| Using localStorage as cache | No invalidation, stale data | Use React Query or SWR with Supabase |
Phase 3: Systematic Debugging (When Issues Arise)
Step 3.1: Launch Diagnostic Agent Swarm
When bugs appear, launch 5 parallel diagnostic agents:
Agent 1: Frontend Payload Validation
- Read frontend code making API call
- Verify request payload structure
- Check all required fields
- Validate data types
Agent 2: API Route Investigation
- Read API route implementation
- Check authentication middleware
- Verify rate limiting
- Check validation logic
- Look for early returns or errors
Agent 3: Database Query Analysis
- Check Supabase queries
- Verify RLS policies
- Check enum constraints
- Look for migration issues
Agent 4: Type System Verification
- Compare types/supabase.ts with migrations
- Check for parameter name mismatches
- Verify return type consistency
- Look for branded type issues
Agent 5: Production Log Analysis
- Use mcp__axiom__queryApl() to query logs
- Look for recent errors
- Check for rate limit failures
- Identify error patterns
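A sketch of launching this swarm with the Task tool (the same pattern used for fix agents in Phase 6); the prompts below are abbreviated placeholders to expand per the agent descriptions above.

```typescript
// Sketch: launch the five diagnostic agents in parallel with the Task tool.
// Prompts are abbreviated; flesh them out per the agent descriptions above.
const diagnosticAgents = [
  { description: 'Frontend payload validation', prompt: 'Read the frontend code calling the API; verify payload structure, required fields, and data types.' },
  { description: 'API route investigation', prompt: 'Read the API route; check withAuth, rate limiting, validation logic, and early returns.' },
  { description: 'Database query analysis', prompt: 'Check Supabase queries, RLS policies, enum constraints, and migration issues.' },
  { description: 'Type system verification', prompt: 'Compare types/supabase.ts with migrations; look for parameter name and return type mismatches.' },
  { description: 'Production log analysis', prompt: 'Query Axiom with mcp__axiom__queryApl for recent errors and rate limit fallbacks.' },
];

// Launch all agents in parallel and wait for their reports.
await Promise.all(
  diagnosticAgents.map((agent) =>
    Task({
      subagent_type: 'general-purpose',
      description: agent.description,
      prompt: agent.prompt,
    })
  )
);
```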
Step 3.2: Trace Data Flow
Follow the request through entire stack:
1. User Action (Frontend)
↓
2. API Call (fetch)
→ Check: Network tab, request payload
↓
3. Next.js API Route
→ Check: withAuth middleware, rate limiting
↓
4. Business Logic (Service Layer)
→ Check: Input validation, error handling
↓
5. Database Query (Supabase)
→ Check: RLS policies, enum constraints
↓
6. Job Creation/Response
→ Check: Return value, error responses
At each step, verify:
- ✅ Data reaches this point (logging)
- ✅ Data format is correct (types)
- ✅ No errors thrown (try-catch)
- ✅ Response returned successfully
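One way to make these per-step checks verifiable is to thread a single correlation ID through the request and log it at each layer, then filter Axiom on that ID. A minimal sketch, assuming the repo's `createServerLogger()` (import path may differ) and an illustrative event naming scheme:

```typescript
// Sketch: thread one requestId through the stack so each step of the data
// flow can be confirmed in Axiom. Logger import path, event names, and the
// requestId field are assumptions; reuse whatever correlation field the repo logs.
import { randomUUID } from 'crypto';
import { createServerLogger } from '@/lib/serverLogger';

export async function POST(request: Request): Promise<Response> {
  const log = createServerLogger();
  const requestId = randomUUID();

  // Steps 2-3: request reached the API route
  const body = (await request.json()) as { prompt?: string };
  log.info('feature.request_received', { requestId, hasPrompt: Boolean(body.prompt) });

  // Step 4: service-layer validation
  if (!body.prompt) {
    log.warn('feature.validation_failed', { requestId, reason: 'missing prompt' });
    await log.flush();
    return Response.json({ error: 'Prompt is required' }, { status: 400 });
  }

  // Steps 5-6: database write / job creation would happen here; log its result
  log.info('feature.job_created', { requestId });

  await log.flush();
  return Response.json({ requestId });
}
```

Filtering Axiom on the `requestId` value then shows exactly which step the request last reached.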
Step 3.3: Check Production Logs FIRST
ALWAYS start debugging with Axiom logs:
// Use MCP tool to query Axiom
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(10m)
| where ['message'] contains "your-feature-name"
or ['message'] contains "error"
| project ['_time'], ['level'], ['message']
| order by ['_time'] desc
| limit 100
`,
});
Look for:
- Error messages
- Stack traces
- Failed assertions
- Rate limit fallbacks
- Database errors
Phase 4: Validation
Step 4.1: Build Verification
# Clean build
rm -rf .next
npm run build
# Check for:
- TypeScript errors
- ESLint warnings
- Unused imports
- Missing dependencies
Step 4.2: Runtime Testing
Create test checklist:
## Runtime Test Checklist
- [ ] Authentication works (user can access feature)
- [ ] Rate limiting allows requests (no PGRST202 errors)
- [ ] API endpoint responds with 200
- [ ] Database records created correctly
- [ ] Jobs appear in processing queue
- [ ] **Storage paths are user-scoped** (`{user_id}/{project_id}/...`)
- [ ] **Temp files cleaned up** (check `os.tmpdir()` after request)
- [ ] **Storage cleanup on failure** (no orphaned files if DB insert fails)
- [ ] **NO localStorage for persistent data** (check browser DevTools)
- [ ] Frontend updates with job status
- [ ] Error states handled gracefully
- [ ] Loading states show properly
- [ ] Success states trigger correctly
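Several of these checklist items can be automated as a small Jest integration test. A sketch, assuming a hypothetical `/api/your-feature/generate` route, a seeded session cookie in `TEST_SESSION_COOKIE`, and the `processing_jobs` table:

```typescript
// Sketch: automated smoke test for the first few runtime checklist items.
// The route path, payload, cookie env var, and column names are illustrative.
import { createClient } from '@supabase/supabase-js';

describe('feature runtime smoke test', () => {
  it('creates a job via the API and records it in processing_jobs', async () => {
    // API endpoint responds with 200
    const res = await fetch('http://localhost:3000/api/your-feature/generate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Cookie: process.env.TEST_SESSION_COOKIE!,
      },
      body: JSON.stringify({ prompt: 'smoke test' }),
    });
    expect(res.status).toBe(200);

    // Database record created correctly
    const supabase = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.SUPABASE_SERVICE_ROLE_KEY!
    );
    const { data: jobs } = await supabase
      .from('processing_jobs')
      .select('id, status, job_type')
      .order('created_at', { ascending: false })
      .limit(1);
    expect(jobs?.[0]?.status).toBe('pending');
  });
});
```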
Step 4.2.1: External API Parity Audit (CRITICAL)
- Load the External API Parity Matrix and iterate over each provider/endpoint row
- Use `rg` or IDE search to confirm the exact `base_url + endpoint_path` string appears in the implementation (services, workers, API routes); a small sketch follows this list
- Perform a dry-run or mocked request (where safe) per endpoint to ensure current payloads align with the documented contract; capture responses in the validation log
- Note any intentional differences directly in the matrix and flag them for sign-off during Phase 4.4 validation
- Skip internal Next.js API routes here—this audit covers external integrations only
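A sketch of the per-row endpoint check using the Bash tool pattern from later phases; the matrix rows shown here are placeholders to replace with the real parity matrix entries, and the result handling is illustrative:

```typescript
// Sketch: confirm each external endpoint from the parity matrix appears verbatim
// in the implementation. Rows are placeholders, not the real matrix contents.
const parityMatrixRows = [
  { provider: 'Ideogram', endpoint: 'https://api.ideogram.ai/v3/generate' },
  { provider: 'FAL AI', endpoint: 'https://queue.fal.run/fal-ai/flux/dev' },
];

for (const row of parityMatrixRows) {
  // rg -F searches for the literal string; fall back to an explicit marker if missing.
  const output = await Bash({
    command: `rg -F --count-matches "${row.endpoint}" lib/ app/ || echo "MISSING"`,
  });
  const found = !String(output).includes('MISSING');
  console.log(`${row.provider}: ${found ? '✅ found in code' : '❌ not found - update code or matrix'}`);
}
```

Record each row's result in the validation log so the Phase 4.4 deliverables can reference it.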
Step 4.3: Database Verification
# Check migrations applied
supabase migration list
# Verify enum values exist
supabase db remote sql "
SELECT enumlabel
FROM pg_enum
JOIN pg_type ON pg_enum.enumtypid = pg_type.oid
WHERE pg_type.typname = 'job_type'
"
# Check recent records
supabase db remote sql "
SELECT id, job_type, status, created_at
FROM processing_jobs
ORDER BY created_at DESC
LIMIT 10
"
Step 4.4: Launch Comprehensive Validation Agent (CRITICAL)
ALWAYS launch validation agent to ensure 100% feature parity and UI/UX match!
After implementation, launch a comprehensive validation agent with these specific requirements:
Task: Comprehensive Feature Parity and UI/UX Validation
MANDATORY VALIDATION CRITERIA:
1. Feature Parity Verification (100% FUNCTIONAL parity required)
- Read reference implementation for FUNCTIONALITY (what it does)
- Create checklist of EVERY feature, function, and capability
- Test each feature in implemented version
- Compare USER-FACING behavior side-by-side with reference
- Document ANY functional differences (even minor ones)
- Verify ALL features work identically FROM USER PERSPECTIVE
**IMPORTANT**: Check functional parity, NOT implementation parity:
- ✅ SAME: Models used (e.g., fal-ai/flux-dev)
- ✅ SAME: Model parameters (image_size, guidance_scale, etc.)
- ✅ SAME: Feature behavior, UI/UX, user flows
- ✅ DIFFERENT (expected): Queue vs direct API calls (repo uses queues)
- ✅ DIFFERENT (expected): Logging implementation (repo uses Axiom)
- ✅ DIFFERENT (expected): Auth implementation (repo uses withAuth)
- ✅ DIFFERENT (expected): Rate limiting (repo uses DB-backed)
- ✅ DIFFERENT (expected): Deployment (repo uses Vercel)
2. UI/UX Match Verification
- Compare layouts (grid, flexbox, spacing)
- Check component placement matches reference
- Verify color schemes and styling
- Test responsive behavior at all breakpoints
- Check animations and transitions
- Verify loading states match
- Test error states display identically
- Check success states trigger same way
- Verify modal/dialog behavior matches
- Test keyboard navigation works same way
3. Functional Completeness
- Test all user flows end-to-end
- Verify all buttons/actions work
- Test all form inputs and validation
- Check all API calls succeed
- Verify all database operations work
- Test all state management updates correctly
- Check all edge cases handled
4. UI/UX Improvement Validation
- IF any UI changes were made to improve UX:
* Document what was changed and WHY
* Verify improvement is objectively better
* Ensure no functionality was lost
* Get user approval for changes
- IF keeping original UI:
* Verify pixel-perfect match with reference
* Check spacing, colors, fonts identical
5. Error Handling Verification
- Test all error paths
- Verify error messages match or improve on reference
- Check graceful degradation works
- Test network failure scenarios
- Verify loading states prevent errors
6. Performance Verification
- Compare load times with reference
- Check for memory leaks
- Verify no performance regressions
- Test with large datasets
- Check for UI lag or stuttering
7. Accessibility Verification
- Test keyboard navigation
- Verify screen reader compatibility
- Check color contrast ratios
- Test focus management
- Verify ARIA labels
8. Security Review
- Verify input sanitization
- Check authentication/authorization
- Test RLS policies
- Verify rate limiting works
- Check for exposed secrets
9. Best Practices Adherence (CRITICAL - THIS REPO PATTERNS)
- ✅ **Infrastructure Check**: Verify ONLY Vercel + Supabase used (NO Docker, Railway, MongoDB, S3, etc.)
- ✅ **Queue Architecture**: Confirm uses `processing_jobs` table, NOT direct API calls
- ✅ **Storage**: Verify Supabase Storage with user-scoped paths `{user_id}/{project_id}/...`
- ✅ **URL Handling**: Confirm NO `supabase://` URLs passed to browser `<img>`, `<video>`, `<audio>` elements
- ✅ **URL Components**: Verify uses `SignedImage/SignedVideo/SignedAudio` or `useSignedStorageUrl` hook
- ✅ **Logging**: Confirm uses Axiom logger (`axiomLogger`), NOT console.log
- ✅ **Error Tracking**: Verify uses Sentry patterns from this repo
- ✅ **Auth**: Confirm uses `withAuth` middleware, NOT custom auth
- ✅ **Types**: Verify uses branded types (UserId, ProjectId, etc.)
- ✅ **Rate Limiting**: Confirm uses DB-backed rate limiting from this repo
- ✅ **Migrations**: Verify uses Supabase migrations, NOT raw SQL or other migration tools
- ✅ **RLS**: Confirm proper RLS policies on all tables
- ✅ **Testing**: Verify uses Jest + React Testing Library (this repo's patterns)
- ❌ **Forbidden**: NO localStorage for persistent data
- ❌ **Forbidden**: NO temp files without cleanup in `finally` blocks
- ❌ **Forbidden**: NO imports from `/reference/` folder
- ❌ **Forbidden**: NO console.log in production code (use logger)
- ❌ **Forbidden**: NO any types in TypeScript
- Read `/docs/CODING_BEST_PRACTICES.md` for full checklist
- Read `CLAUDE.md` for critical rules and patterns
- Report ANY violations with file:line references
DELIVERABLES:
1. Feature Parity Report:
- ✅ Features implemented and working
- ❌ Features missing or broken
- ⚠️ Features with different behavior
- 📊 Parity score: X/Y features (target: 100%)
2. UI/UX Match Report:
- ✅ UI elements matching reference
- ❌ UI elements with differences
- 💡 UI improvements made (with justification)
- 📸 Screenshots comparing reference vs implementation
3. Test Results:
- All user flows tested
- All edge cases covered
- All error scenarios handled
- Performance benchmarks
4. External API Parity Verification Log:
- Matrix row-by-row checklist showing each provider/endpoint verified
- Evidence of code search or dry-run request per endpoint (include timestamps/logs)
- Notes on any intentional deviations, signed off by the refactor lead
- Confirmation that doc_source/doc_published_at are still accurate or updated with the latest November 2025 references
5. Final Validation Status:
- PASS: 100% feature parity + UI match/improvement
- FAIL: Missing features or broken functionality
- NEEDS_WORK: Partial implementation requiring fixes
SUCCESS CRITERIA (ALL must be true):
- ✅ 100% of reference features implemented and working
- ✅ UI matches reference OR documented improvements approved
- ✅ All user flows work end-to-end
- ✅ No console errors or warnings
- ✅ All tests pass
- ✅ Performance meets or exceeds reference
- ✅ Accessibility standards met
- ✅ Security review passed
- ✅ **Best practices adherence: 100% compliance with this repo's patterns**
- ✅ **Infrastructure: ONLY Vercel + Supabase used (no other platforms)**
- ✅ **URL Handling: NO supabase:// URLs in browser elements**
- ✅ **Storage: User-scoped paths with proper cleanup**
- ✅ **Zero violations from `/docs/CODING_BEST_PRACTICES.md` or `CLAUDE.md`**
- ✅ **External API Parity Matrix fully validated and attached to the final report**
IF VALIDATION FAILS:
1. Document ALL failures in detail
2. Create prioritized fix list
3. Fix issues one by one
4. Re-run validation
5. Repeat until PASS
Step 4.4.5: Final Implementation Validation Agent (CRITICAL - MANDATORY)
ALWAYS validate final implementation against the reference document from Phase 1.1.5!
This is the MOST CRITICAL validation step. The agent compares the implemented feature against the comprehensive reference document to ensure 100% parity across 5 critical dimensions.
Launch Final Validation Agent:
Task: Final Implementation Validation Against Reference Document
OBJECTIVE: Compare implemented feature against reference document (Phase 1.1.5) to ensure 100% parity.
CRITICAL: Load docs/reports/REFACTORING_REFERENCE_[FEATURE_NAME].md (if missing, STOP and ERROR)
VALIDATE 5 CRITICAL DIMENSIONS:
1. MODELS/ENDPOINTS (P0 - Prevents Hallucination)
- Load the External API Parity Matrix created in Phase 1.1 (must be attached to the reference doc)
- Extract all models and external endpoints from the matrix and reference doc
- Search implementation: `grep -r "ideogram\|flux\|runway" app/ lib/` (add additional terms from the matrix)
- Compare EXACT strings and URLs against the matrix (e.g., `https://api.ideogram.ai/v3/generate`)
- Report ANY differences between implementation and matrix (string mismatch, missing auth header, outdated version)
- If official docs have changed since November 2025, update the matrix FIRST, then rerun validation
2. WORKFLOW (P0 - 100% Parity Required)
- Load workflow steps from reference doc
- Test EACH step in implementation
- Report deviations (timing, UI, behavior)
3. FEATURES (P0 - Complete Inventory)
- Load feature checklist from reference doc
- Check 100% of features exist and work
- Report: Implemented | Missing | Different Behavior
4. AXIOM LOGGING (P0 - All Required Points)
- Verify all required log points from reference doc
- Test logs appear in Axiom with all fields
- Report missing logs
5. STORAGE PATTERNS (P0 - Compliance Check)
- Verify user-scoped paths: {user_id}/{project_id}/...
- Check NO localStorage for persistent data
- Verify temp file cleanup in finally blocks
- Check storage cleanup on DB failures
GENERATE PASS/FAIL REPORT:
```markdown
# FINAL VALIDATION REPORT
## Status: PASS | FAIL
| Category | Score | Critical Issues |
|----------|-------|-----------------|
| Models/Endpoints & External APIs | X% | N issues |
| Workflow | X% | N deviations |
| Features | X% | N missing |
| Axiom Logging | X% | N missing |
| Storage | X% | N violations |
**OVERALL**: X/100 (PASS requires ≥95)
## Critical Issues
1. [Issue with exact location and fix]
2. [...]
## Required Fixes
[Prioritized fix list]
## External API Parity Summary
- Matrix version/date: YYYY-MM-DD
- Rows verified: N / N (list any skipped endpoints and why)
- Proof artifacts: [links to validation logs, dry-run outputs]
```
OUTPUT: docs/reports/FINAL_VALIDATION_[FEATURE_NAME].md
STATUS MUST BE PASS (≥95/100) BEFORE PRODUCTION DEPLOYMENT!
**After Validation:**
- **FAIL (< 95)**: Fix ALL critical issues → Re-run validation → Repeat until PASS
- **PASS (≥ 95)**: Review minor warnings → Proceed to production
**Success Criteria:**
- [ ] Reference document loaded successfully
- [ ] Models/endpoints 100% exact match (no v3→v2 changes)
- [ ] Workflow 100% parity (all steps work identically)
- [ ] Features ≥95% parity (all P0/P1 features implemented)
- [ ] Axiom logging 100% complete (all required points)
- [ ] Storage patterns 100% compliant (no localStorage violations)
- [ ] Best practices followed (auth, validation, rate limiting)
- [ ] Overall score ≥95/100
- [ ] STATUS = PASS in final report
**Do NOT deploy to production until this validation PASSES!**
#### Step 4.5: Next.js Client/Server Boundary Validation (CRITICAL)
**ALWAYS check for client/server boundary violations before production deployment!**
This step prevents critical production bugs where server-only code is marked as client-side or vice versa.
**Why This Matters:**
- Build may succeed locally but fail in production
- Error only manifests when server-side code executes
- Common error: "Attempted to call [function] from the server but [function] is on the client"
- Validation agents often miss this because they focus on feature functionality, not Next.js boundaries
**Common Patterns That Cause Issues:**
1. **`'use client'` directive in files with server-only exports**
- Files exporting `createServerLogger()`, server actions, API route utilities
- Files with database utilities, server-side encryption, server-only configs
2. **Server-only functions imported into client components**
- Without proper dynamic imports
- Without conditional rendering
3. **Client-only functions called in API routes or server components**
- Browser APIs (localStorage, window, document) in server code
- Client-side state management in server code
**Automated Validation Checks:**
```bash
# Step 4.5.1: Check for 'use client' in server-only files
echo "=== Checking for 'use client' directive in server-only files ==="
# Check lib/logger.ts and related files
echo "Checking logger files..."
grep -n "use client" lib/logger.ts lib/logger/*.ts 2>/dev/null && \
echo "❌ ERROR: Found 'use client' in logger files!" || \
echo "✅ PASS: No 'use client' in logger files"
# Check for 'use client' in files exporting server-only functions
echo "Checking for 'use client' in files with createServer* exports..."
for file in $(grep -rl "export.*createServer" lib --include="*.ts" 2>/dev/null); do
if grep -q "use client" "$file"; then
echo "❌ ERROR: $file has 'use client' but exports server-only function"
fi
done
# Check API route utilities
echo "Checking API utilities..."
grep -rn "use client" lib/auth/*.ts lib/rateLimit.ts 2>/dev/null && \
echo "❌ ERROR: Found 'use client' in API utilities!" || \
echo "✅ PASS: No 'use client' in API utilities"
# Step 4.5.2: Check for server-only functions imported into client components
echo ""
echo "=== Checking for server-only imports in client components ==="
# Find all files with 'use client'
CLIENT_FILES=$(grep -rl "use client" app/ components/ 2>/dev/null)
# Check each client file for server-only imports
for file in $CLIENT_FILES; do
# Check for server-only logger imports
if grep -q "import.*createServerLogger" "$file"; then
echo "❌ ERROR: $file imports createServerLogger but is a client component"
fi
# Check for server-only crypto imports
if grep -q "import.*crypto.*from.*'crypto'" "$file"; then
echo "❌ ERROR: $file imports Node crypto but is a client component"
fi
# Check for server action imports without proper handling
if grep -q "import.*from.*'@/app/api/" "$file"; then
echo "⚠️ WARNING: $file imports from /app/api/ - verify this is intentional"
fi
done
# Step 4.5.3: Check API routes for client-only code
echo ""
echo "=== Checking API routes for client-only code ==="
# Find all API route files
API_ROUTES=$(find app/api -name "route.ts" -o -name "route.js" 2>/dev/null)
for route in $API_ROUTES; do
# Check for browser APIs
if grep -E "(window\.|document\.|localStorage|sessionStorage)" "$route" 2>/dev/null; then
echo "❌ ERROR: $route uses browser APIs (client-only)"
fi
# Check for 'use client' in API routes (should never happen)
if grep -q "use client" "$route" 2>/dev/null; then
echo "❌ CRITICAL: $route has 'use client' directive (API routes are always server-side)"
fi
done
# Step 4.5.4: Verify server utilities are not marked as client
echo ""
echo "=== Verifying server utilities ==="
# List of files that should NEVER have 'use client'
SERVER_ONLY_PATTERNS=(
"lib/services/"
"lib/database/"
"lib/jobs/"
"lib/email/"
"lib/encryption/"
)
for pattern in "${SERVER_ONLY_PATTERNS[@]}"; do
if [ -d "$pattern" ]; then
SERVER_FILES=$(find "$pattern" -name "*.ts" -o -name "*.tsx" 2>/dev/null)
for file in $SERVER_FILES; do
if grep -q "use client" "$file" 2>/dev/null; then
echo "❌ ERROR: $file should be server-only but has 'use client'"
fi
done
fi
done
echo ""
echo "✅ Client/Server boundary validation complete"
Manual Verification Checklist:
After running automated checks, manually verify:
Logger Usage
- `lib/logger.ts` has NO `'use client'` directive
- `createServerLogger()` is only imported in server code (API routes, server components)
- `createClientLogger()` is only imported in client components
- No mixed usage of server/client loggers
API Route Utilities
- `lib/auth/authUtils.ts` has NO `'use client'`
- `lib/rateLimit.ts` has NO `'use client'`
- `lib/errorHandler.ts` has NO `'use client'`
- These files are only imported in API routes
Service Layer
- All files in `lib/services/` have NO `'use client'`
- Services are only used server-side (API routes, server actions)
Database Utilities
- All files in `lib/database/` have NO `'use client'`
- Supabase server client is only used server-side
Client Components
- Components with `'use client'` don't import server-only utilities
- Browser APIs only used in client components
- State management (Zustand) only in client code
Server Components
- Server components don't import client-only utilities
- No browser APIs in server components
- Database queries happen in server components or API routes
Common Violations to Watch For:
// ❌ BAD: 'use client' in file with server-only exports
'use client';
export function createServerLogger() {
// This will fail in production!
}
// ✅ GOOD: No 'use client', function is server-only
export function createServerLogger() {
// Works correctly
}
// ❌ BAD: Client component importing server-only function
('use client');
import { createServerLogger } from '@/lib/logger';
// ✅ GOOD: Client component uses client logger
('use client');
import { createClientLogger } from '@/lib/logger';
// ❌ BAD: API route using browser APIs
export async function POST(request: Request) {
const data = localStorage.getItem('key'); // Fails!
}
// ✅ GOOD: API route uses server-only APIs
export async function POST(request: Request) {
const data = await getFromDatabase();
}
If Violations Found:
Identify the violation type
- Server-only code marked as client?
- Client-only code in server context?
- Mixed imports?
Determine correct fix
- Remove incorrect `'use client'` directive
- Move server-only code to separate file
- Use dynamic imports for client components
- Split mixed files into client/server versions
Verify fix
- Re-run automated checks
- Clear `.next` cache: `rm -rf .next`
- Rebuild: `npm run build`
- Test in development
- Deploy to preview environment
- Test in production
Document the fix
- Note what was wrong and why
- Add to ISSUES_RESOLVED.md
- Update this skill with new patterns if needed
Lesson Learned (October 2025):
During a refactor, lib/logger.ts had 'use client' directive added accidentally. This caused all API routes using createServerLogger() to fail in production with:
Error: Attempted to call createServerLogger() from the server but createServerLogger is on the client.
Root cause: The file had 'use client' at the top, making ALL exports client-side only.
Impact: Build succeeded, no TypeScript errors, but production API routes completely failed.
Prevention: This Step 4.5 now catches these issues before deployment.
Success Criteria:
- ✅ All automated checks pass
- ✅ No `'use client'` in server-only files
- ✅ No client-only code in API routes
- ✅ Manual checklist 100% complete
- ✅ Build succeeds with no warnings
- ✅ Preview deployment tested successfully
ONLY proceed to Phase 5 when ALL checks pass.
Phase 5: API Documentation Verification with Firecrawl (CRITICAL)
ALWAYS verify API connections before production testing!
This phase uses Firecrawl to scrape actual API documentation and verify that your implementation matches the latest specs, including exact model names.
Step 5.0: Launch Firecrawl Verification Agent Swarm
Launch parallel agents to verify ALL external API integrations:
Agent 1: FAL AI API Verification
- Scrape https://fal.ai/models docs for each model used
- Verify endpoints, parameters, response formats
- Check for deprecated models or parameters
- Validate authentication method
Agent 2: Replicate API Verification
- Scrape https://replicate.com/docs for each model
- Verify model IDs, input parameters, output formats
- Check webhook configuration
- Validate API key usage
Agent 3: OpenAI API Verification (if used)
- Scrape https://platform.openai.com/docs/api-reference
- Verify endpoints for chat, embeddings, etc.
- Check parameter formats and limits
- Validate authentication headers
Agent 4: Supabase API Verification
- Scrape https://supabase.com/docs/reference/javascript
- Verify RPC call signatures
- Check storage API methods
- Validate auth methods
Agent 5: Any Other External APIs
- Stripe, Axiom, Vercel, etc.
- Verify endpoints and parameters
- Check authentication methods
- Validate webhook signatures
Step 5.0.1: Firecrawl Verification Workflow
For EACH external API used in your feature:
Step 1: Scrape Latest API Documentation
// Use Firecrawl MCP to get latest docs
await mcp__firecrawl__firecrawl_scrape({
url: 'https://fal.ai/models/fal-ai/flux/dev',
formats: ['markdown'],
onlyMainContent: true,
});
// Extract key information:
// - Endpoint URL
// - Required parameters
// - Optional parameters
// - Response format
// - Authentication method
// - Rate limits
// - Deprecated features
Step 2: Compare with Your Implementation
// Read your API service implementation
const serviceCode = await Read({
file_path: 'lib/services/falService.ts',
});
// Check for mismatches:
// ❌ WRONG endpoint URL
// ❌ Missing required parameters
// ❌ Incorrect parameter types
// ❌ Wrong authentication header
// ❌ Using deprecated endpoints
// ❌ Incorrect response parsing
Step 3: Verify Endpoint Accuracy
# Check all API calls in codebase
grep -r "fal.run\|fal.queue" lib/ app/
# Verify model IDs match documentation
grep -r "fal-ai/flux/dev" lib/ app/
# Check parameter names
grep -A 10 "fal.run.*flux" lib/
Step 4: Validate Parameter Completeness
// Create parameter checklist from docs
const docsParameters = {
// From scraped docs
required: ['prompt', 'image_size'],
optional: ['num_inference_steps', 'guidance_scale', 'seed'],
};
// Compare with implementation
const implParameters = {
// From your code
provided: ['prompt', 'image_size', 'num_inference_steps'],
};
// ❌ Missing optional parameters that could improve results
// ✅ All required parameters present
Step 5.0.2: API Verification Checklist
For EACH API integration, verify:
FAL AI:
- Model ID matches docs (e.g., `fal-ai/flux/dev`, not `fal/flux-dev`)
- All required parameters provided
- Parameter types correct (`number` vs `string`)
- Response parsing handles all possible formats
- Error handling for API failures
- Authentication uses correct header (`Authorization: Key ${FAL_KEY}`)
- Webhook configuration correct (if using async)
Replicate:
- Model version ID current (not deprecated)
- Input format matches schema
- Output format parsed correctly
- Prediction polling implemented
- Webhook handling correct
- Authentication uses correct header (`Authorization: Token ${REPLICATE_TOKEN}`)
OpenAI:
- Endpoint URL correct (`https://api.openai.com/v1/...`)
- Model name valid (`gpt-4`, not `gpt4`)
- Message format correct (role, content)
- Token limits respected
- Streaming handled correctly (if used)
- Error codes handled properly
Supabase:
- RPC function names match database
- Parameter names match function signature (e.g., `p_rate_key`)
- Return type parsed correctly
- Auth headers included
- RLS policies considered
General API Checks:
- Base URLs correct (not hardcoded staging URLs)
- HTTPS used (not HTTP)
- API keys from environment variables (not hardcoded)
- Timeout handling implemented
- Retry logic for transient failures
- Rate limiting respected
Step 5.0.3: Common API Mismatches to Check
1. Model ID Format Mismatches
// Use Firecrawl to check current model ID format
await mcp__firecrawl__firecrawl_search({
query: 'fal ai flux dev model id format',
limit: 3,
});
// ❌ WRONG: Old or incorrect format
const result = await fal.run('flux/dev', { ... });
// ✅ CORRECT: Format from docs
const result = await fal.run('fal-ai/flux/dev', { ... });
2. Parameter Name Mismatches
// Scrape parameter docs
await mcp__firecrawl__firecrawl_scrape({
url: 'https://fal.ai/models/fal-ai/flux/dev/api',
formats: ['markdown'],
});
// ❌ WRONG: Old parameter name
{ prompt: '...', imageSize: '1024x1024' } // camelCase
// ✅ CORRECT: Parameter name from docs
{ prompt: '...', image_size: '1024x1024' } // snake_case
3. Response Format Mismatches
// ❌ WRONG: Assuming single image
const imageUrl = result.image.url;
// ✅ CORRECT: Handle array response
const imageUrl = result.images[0].url;
4. Authentication Header Mismatches
// ❌ WRONG: Incorrect header format
headers: { 'X-API-Key': process.env.FAL_KEY }
// ✅ CORRECT: Format from docs
headers: { 'Authorization': `Key ${process.env.FAL_KEY}` }
5. Endpoint URL Mismatches
// ❌ WRONG: Outdated or incorrect URL
const response = await fetch('https://api.fal.run/v1/models/flux-dev');
// ✅ CORRECT: Current URL from docs
const response = await fetch('https://queue.fal.run/fal-ai/flux/dev');
Step 5.0.4: Firecrawl API Verification Script Pattern
Use this workflow to verify ALL APIs:
// List of all APIs used in the project
const apiServices = [
{
name: 'FAL AI',
docsUrl: 'https://fal.ai/models',
implementation: 'lib/services/falService.ts',
},
{
name: 'Replicate',
docsUrl: 'https://replicate.com/docs',
implementation: 'lib/services/replicateService.ts',
},
{
name: 'Supabase',
docsUrl: 'https://supabase.com/docs/reference/javascript',
implementation: 'lib/supabase/client.ts',
},
];
for (const api of apiServices) {
console.log(`\n🔍 Verifying ${api.name}...`);
// 1. Scrape latest docs
const docs = await mcp__firecrawl__firecrawl_scrape({
url: api.docsUrl,
formats: ['markdown'],
});
// 2. Read implementation
const implementation = await Read({
file_path: api.implementation,
});
// 3. Extract endpoints from implementation
const endpoints = extractEndpoints(implementation);
// 4. For each endpoint, verify against docs
for (const endpoint of endpoints) {
console.log(` 📍 Checking endpoint: ${endpoint.url}`);
// Search docs for this specific endpoint
const endpointDocs = await mcp__firecrawl__firecrawl_search({
query: `${api.name} ${endpoint.name} API parameters`,
limit: 5,
scrapeOptions: { formats: ['markdown'] },
});
// Compare parameters
const docParams = extractParameters(endpointDocs);
const implParams = extractParameters(endpoint.code);
// Report mismatches
const missingParams = docParams.required.filter((p) => !implParams.includes(p));
if (missingParams.length > 0) {
console.log(` ❌ Missing required parameters: ${missingParams.join(', ')}`);
} else {
console.log(` ✅ All required parameters present`);
}
}
}
Step 5.0.5: Launch Firecrawl Verification Agent
Create a dedicated agent to run comprehensive API verification:
Task: Verify ALL external API integrations with Firecrawl
For EACH API used in the feature:
1. Use Firecrawl to scrape latest API documentation
2. Extract: endpoints, parameters, auth methods, response formats
3. Read implementation code for that API
4. Compare implementation with documentation
5. Report mismatches:
- Wrong endpoints
- Missing parameters
- Incorrect parameter types
- Wrong authentication headers
- Deprecated features in use
- Response format mismatches
APIs to verify:
- FAL AI (lib/services/falService.ts or similar)
- Replicate (lib/services/replicateService.ts or similar)
- OpenAI (lib/services/openaiService.ts or similar)
- Supabase (lib/supabase/client.ts)
- Any other external APIs used
Create detailed report with:
- ✅ Correct implementations
- ❌ Mismatches found
- 🔧 Required fixes
- 📚 Documentation links
Priority: Fix all mismatches before proceeding to production testing.
Step 5.0.6: API Verification Success Criteria
ALL of these must pass before production testing:
- ✅ All model IDs match latest documentation
- ✅ All required parameters present
- ✅ Parameter names and types correct
- ✅ Authentication headers use correct format
- ✅ Endpoint URLs are current (not deprecated)
- ✅ Response parsing handles all documented formats
- ✅ Error handling covers all documented error codes
- ✅ Rate limits implemented where documented
- ✅ Webhooks configured correctly (if async APIs)
- ✅ No hardcoded credentials or staging URLs
If ANY API verification fails:
- Stop and fix the mismatch
- Build and test locally
- Re-run Firecrawl verification
- Only proceed when all APIs verified ✅
Step 5.0.7: Model Name Validation (CRITICAL)
ALWAYS verify that model names in code match EXACTLY with official API documentation!
Model name mismatches are a common source of bugs:
- API calls fail with "model not found" errors
- Using outdated/deprecated model versions
- Wrong pricing tier
- Missing features from newer models
Model Validation Workflow:
1. Inventory All Models in Codebase
Search for model references:
# Search for model configurations
grep -r "model.*:" lib/constants/ lib/services/ lib/workers/
grep -r "modelId\|model_id" lib/ app/
# Search for specific model patterns
grep -ri "gpt-\|dall\|claude\|gemini\|imagen\|ideogram\|flux\|veo\|minimax" lib/ app/ | grep -v node_modules
2. For Each Model Found, Verify Against Official Docs
Use Firecrawl to get latest model names:
// OpenAI Models
await mcp__firecrawl__firecrawl_search({
query: 'OpenAI image generation models 2025',
limit: 3,
scrapeOptions: { formats: ['markdown'] },
});
// Extract: gpt-image-1 (current), NOT dalle-3 (deprecated)
// Ideogram Models
await mcp__firecrawl__firecrawl_search({
query: 'Ideogram API v3 models 2025',
limit: 3,
scrapeOptions: { formats: ['markdown'] },
});
// Extract: ideogram-v3, ideogram-v3-turbo
// Google Imagen
await mcp__firecrawl__firecrawl_search({
query: 'Google Imagen 3 model ID 2025',
limit: 3,
scrapeOptions: { formats: ['markdown'] },
});
// Extract: imagen-3.0-generate-001 or latest
// Google Veo (Video)
await mcp__firecrawl__firecrawl_search({
query: 'Google Veo API model name 2025',
limit: 3,
scrapeOptions: { formats: ['markdown'] },
});
// FAL AI Models
await mcp__firecrawl__firecrawl_scrape({
url: 'https://fal.ai/models',
formats: ['markdown'],
});
// Extract exact model IDs like: fal-ai/flux/dev, fal-ai/minimax-video
3. Create Model Validation Matrix
Document all models:
| Feature | Current Model (Code) | Official Model (Docs) | Match? | Action |
|---|---|---|---|---|
| Image Gen (OpenAI) | dalle-3 ❌ | gpt-image-1 ✅ | ❌ | Replace |
| Image Gen (Ideogram) | ideogram-v2 ❌ | ideogram-v3 ✅ | ❌ | Upgrade |
| Video Gen (Google) | veo-2 ❌ | veo-3 ✅ | ❌ | Upgrade |
| ... | ... | ... | ... | ... |
4. Common Model Mismatches to Check
OpenAI Images:
// ❌ WRONG (deprecated)
model: 'dall-e-3';
model: 'dalle-3';
// ✅ CORRECT (current)
model: 'gpt-image-1';
Ideogram:
// ❌ WRONG (old version)
model: 'ideogram-v2';
model: 'ideogram';
// ✅ CORRECT (latest)
model: 'ideogram-v3';
model: 'ideogram-v3-turbo'; // Faster variant
Google Imagen:
// ❌ WRONG (old version)
model: 'imagen-2.0';
model: 'imagegeneration@002';
// ✅ CORRECT (current)
model: 'imagen-3.0-generate-001';
Google Veo:
// ❌ WRONG (generic name)
model: 'veo';
model: 'google-veo';
// ✅ CORRECT (specific version - verify from docs)
model: 'veo-3';
model: 'veo-3.1';
FAL AI Models:
// ❌ WRONG (incorrect format)
model: 'flux-dev';
model: 'minimax';
// ✅ CORRECT (full path format)
model: 'fal-ai/flux/dev';
model: 'fal-ai/minimax-video';
model: 'fal-ai/seedance';
5. Verify Model Version Numbers
Always use the LATEST stable version unless there's a specific reason not to:
- ❌ Don't use: v1, v2 when v3 is available
- ✅ Do use: Latest stable version
- ⚠️ Check: Release notes for breaking changes
6. Fix All Model Name Mismatches
For each mismatch:
// Read the file
const file = await Read({ file_path: 'lib/constants/models.ts' });
// Replace incorrect model names
await Edit({
file_path: 'lib/constants/models.ts',
old_string: `model: "dalle-3"`,
new_string: `model: "gpt-image-1"`,
});
// Verify change compiles
await Bash({ command: 'npm run build' });
7. Validate Model-Provider Mapping
Ensure models map to correct providers:
// Verify this mapping is correct
const MODEL_TO_PROVIDER = {
'gpt-image-1': 'openai', // ✅
'ideogram-v3': 'ideogram', // ✅
'imagen-3.0-generate-001': 'google', // ✅
'fal-ai/flux/dev': 'fal', // ✅
// ❌ These would be WRONG
'dalle-3': 'openai', // ❌ Wrong model name
ideogram: 'ideogram', // ❌ Missing version
};
8. Update Documentation
After fixing models, update:
- Constants file with comments showing official docs URL
- README with current models supported
- Migration guide if changing models
Step 5.0.8: Model Validation Checklist
Before proceeding to Phase 6, verify:
- All models inventoried from codebase
- Every model verified against official 2025 docs
- Model names use EXACT strings from API docs
- Using latest stable versions (v3 not v2, etc.)
- No deprecated models (dalle-3 → gpt-image-1)
- Model-to-provider mappings correct
- All fixes compile successfully
- Test API calls work with new model names
- Documentation updated with current models
Common Failures:
If model validation fails, common causes:
- Using old documentation (not 2025)
- Copy-pasting from outdated examples
- Using community nicknames instead of official IDs
- Missing version numbers
- Wrong provider prefix
Fix: Always use Firecrawl to get latest official docs, never assume model names!
Phase 6: Production Testing & Recursive Validation (CRITICAL)
NEVER skip this phase! This is where you catch issues that only appear in production.
Step 6.1: Deploy to Production
# Commit and push changes
git add -A
git commit -m "Implement feature: [name]"
git push
# Wait for Vercel deployment
vercel ls
# Get production URL
vercel inspect [deployment-url]
Step 6.2: Test with Chrome DevTools MCP
Use Chrome DevTools MCP to interact with production deployment:
// 1. Navigate to production site
await mcp__chrome_devtools__navigate_page({
url: 'https://your-production-url.vercel.app',
});
// 2. Take snapshot to see page state
await mcp__chrome_devtools__take_snapshot();
// 3. Login (if required)
await mcp__chrome_devtools__fill({
uid: '[email-input-uid]',
value: 'david@dreamreal.ai',
});
await mcp__chrome_devtools__fill({
uid: '[password-input-uid]',
value: 'sc3p4sses',
});
await mcp__chrome_devtools__click({ uid: '[login-button-uid]' });
// 4. Navigate to feature
await mcp__chrome_devtools__navigate_page({
url: 'https://your-production-url.vercel.app/feature-path',
});
// 5. Test the feature
await mcp__chrome_devtools__click({ uid: '[trigger-button-uid]' });
// 6. Check console for errors
const consoleMessages = await mcp__chrome_devtools__list_console_messages({
types: ['error', 'warn'],
});
// 7. Check network requests
const networkRequests = await mcp__chrome_devtools__list_network_requests({
resourceTypes: ['xhr', 'fetch'],
});
// 8. Verify API responses
const apiRequest = await mcp__chrome_devtools__get_network_request({
reqid: '[request-id]',
});
Step 6.3: Check Axiom for Production Errors
Query Axiom immediately after testing:
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(5m)
| where ['severity'] == "error"
or ['level'] == "error"
or ['status'] >= 400
| project ['_time'], ['level'], ['message'], ['stack'], ['userId'], ['url']
| order by ['_time'] desc
| limit 100
`,
});
Look for:
- API errors (500, 400, 404)
- Rate limit fallbacks
- Database errors (PGRST*)
- Unhandled exceptions
- Failed RPC calls
- Type errors
Step 6.4: Recursive Validation Loop (UNTIL CLEAN)
CRITICAL: Keep testing and fixing until Axiom shows NO errors.
// Recursive Validation Pattern:
let ITERATION = 1;
while (AXIOM_HAS_ERRORS) {
console.log(`🔄 Validation Iteration ${ITERATION}`);
// 1. Query Axiom for recent errors
const errors = await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(10m)
| where ['severity'] == "error"
| summarize count() by ['message'], ['url']
| order by ['count_'] desc
`,
});
if (errors.length === 0) {
console.log('✅ No errors found in Axiom - validation complete!');
break;
}
// 2. Analyze each unique error
for (const error of errors) {
console.log(`🐛 Found error: ${error.message} (${error.count} occurrences)`);
// 3. Launch diagnostic agent to fix
await Task({
subagent_type: 'general-purpose',
description: 'Fix production error',
prompt: `
Fix this production error found in Axiom:
Error: ${error.message}
URL: ${error.url}
Occurrences: ${error.count}
Steps:
1. Read the code causing this error
2. Identify root cause
3. Implement fix
4. Build and verify TypeScript passes
5. Commit and push
6. Wait for deployment
7. Report fix applied
`,
});
}
// 4. Wait for fixes to deploy
console.log('⏳ Waiting 2 minutes for deployment...');
await new Promise((resolve) => setTimeout(resolve, 120000));
// 5. Re-test in production with Chrome DevTools
await mcp__chrome_devtools__navigate_page({
url: 'https://your-production-url.vercel.app/feature-path',
});
await mcp__chrome_devtools__click({ uid: '[trigger-button-uid]' });
ITERATION++;
if (ITERATION > 10) {
console.log('⚠️ Max iterations reached - manual intervention required');
break;
}
}
console.log(`✅ Validation complete after ${ITERATION} iterations`);
Step 6.5: Final Production Verification
After recursive loop completes, run comprehensive checks:
// 1. Check Axiom for last 30 minutes - should be CLEAN
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(30m)
| where ['severity'] == "error"
| summarize
total_errors = count(),
unique_errors = dcount(['message']),
affected_users = dcount(['userId'])
`,
});
// Expected: total_errors = 0
// 2. Check all feature endpoints return 200
const endpoints = ['/api/feature/create', '/api/feature/status', '/api/feature/list'];
for (const endpoint of endpoints) {
const requests = await mcp__chrome_devtools__list_network_requests({
resourceTypes: ['xhr', 'fetch'],
});
// Verify all responses are 2xx
}
// 3. Check no console errors in browser
const consoleErrors = await mcp__chrome_devtools__list_console_messages({
types: ['error'],
});
// Expected: consoleErrors.length = 0
// 4. Check rate limiting is database-backed (not in-memory fallback)
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(10m)
| where ['message'] contains "rate limit"
and ['message'] contains "fallback"
| count
`,
});
// Expected: count = 0 (no fallback to in-memory)
// 5. Check database records created correctly
await Bash({
command: `
supabase db remote sql "
SELECT count(*) as recent_jobs
FROM processing_jobs
WHERE created_at > NOW() - INTERVAL '10 minutes'
"
`,
});
// Expected: recent_jobs > 0
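If jobs are being created, it is also worth confirming they finish; a hedged sketch, assuming the pending/processing/completed status values used elsewhere in this skill plus a failed state:
// 6. Check job outcomes (nothing stuck or failed)
await Bash({
  command: `
    supabase db remote sql "
      SELECT status, COUNT(*) AS jobs
      FROM processing_jobs
      WHERE created_at > NOW() - INTERVAL '10 minutes'
      GROUP BY status
    "
  `,
});
// Expected: completed > 0, failed = 0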
Step 6.6: Success Criteria for Production Validation
ALL of these must be true before proceeding:
- ✅ Axiom shows 0 errors in last 30 minutes
- ✅ All API endpoints return 2xx status codes
- ✅ No browser console errors
- ✅ No rate limit fallbacks to in-memory
- ✅ Database records created successfully
- ✅ Chrome DevTools shows feature working correctly
- ✅ User flow completes without errors
- ✅ Loading states display properly
- ✅ Success states trigger correctly
- ✅ Error handling works gracefully
If ANY criteria fails, continue recursive validation loop.
Phase 7: Documentation & Commit
Step 7.1: Update Documentation
Update these files:
- ISSUES.md - Mark issues as resolved
- docs/reports/ISSUES_RESOLVED.md - Add detailed fix documentation
- API documentation - Document new endpoints
- Component documentation - Update JSDoc comments
Step 7.2: Commit with Detailed Message
git add -A && git commit -m "
[Type]: [Brief description]
[Detailed description of changes]
Changes:
- File 1: Change description
- File 2: Change description
- ...
Fixes: #123
Resolves issue with [specific problem]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
"
git push
Common Pitfalls & Solutions
Pitfall 1: Rate Limiting Always Fails (MOST COMMON)
Symptom: All routes fall back to in-memory rate limiting, Axiom shows "rate limit fallback"
Root Cause: Database function signature changed but types not updated
Detection:
// Check Axiom for fallback messages
await mcp__axiom__queryApl({
query: `
['nonlinear-editor']
| where ['_time'] > ago(1h)
| where ['message'] contains "rate limit" and ['message'] contains "fallback"
| count
`,
});
// If count > 0, you have rate limit mismatches
Solution:
- Read latest migration: supabase/migrations/*rate_limit*.sql
- Check parameter names (e.g., p_rate_key vs rate_key)
- Check return type (RETURNS integer vs RETURNS TABLE)
- Update types/supabase.ts to match EXACTLY (see the sketch below)
- Update lib/rateLimit.ts RPC calls with correct param names
- Clear cache: rm -rf .next
- Build and verify: npm run build
- Deploy and verify in Axiom (no fallback messages)
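A hedged sketch of what the three pieces look like when they line up; the parameter names and the window argument here are illustrative, not taken from the actual migration:
-- supabase/migrations/*_rate_limit.sql (signature being matched)
-- CREATE OR REPLACE FUNCTION increment_rate_limit(p_rate_key text, p_window_seconds integer)
-- RETURNS integer

// types/supabase.ts - Functions entry uses the same argument names and return type
increment_rate_limit: {
  Args: { p_rate_key: string; p_window_seconds: number };
  Returns: number;
};

// lib/rateLimit.ts - RPC call passes the same argument names
const { data, error } = await supabase.rpc('increment_rate_limit', {
  p_rate_key: key,
  p_window_seconds: windowSeconds,
});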
Full Verification:
# 1. Check migration signature
grep -A 5 "CREATE OR REPLACE FUNCTION increment_rate_limit" supabase/migrations/*.sql
# 2. Check types match
grep -A 10 "increment_rate_limit" types/supabase.ts
# 3. Check RPC calls match
grep -A 5 "supabase.rpc('increment_rate_limit'" lib/rateLimit.ts
# 4. Deploy and verify
git push
# Wait for deployment
# Check Axiom for fallback messages (should be 0)
Pitfall 2: TypeScript Types Don't Match Database
Symptom: API calls fail with PGRST202 or "function not found" errors
Root Cause: types/supabase.ts has outdated function signatures
Solution:
- Read the latest migration file
- Update types/supabase.ts to match EXACTLY
- Clear .next cache: rm -rf .next
- Restart dev server
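If the repo is linked to the Supabase CLI, regenerating the file is usually safer than hand-editing it; a sketch, assuming a linked project (flags vary by CLI version):
# Regenerate types from the live schema instead of editing by hand
supabase gen types typescript --linked > types/supabase.ts
rm -rf .next && npm run build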
Pitfall 3: Branded Types Break Equality Checks
Symptom: All items marked as "invalid" despite being valid
Root Cause: Comparing Set<BrandedType> with plain string
Solution:
// Always convert branded types to strings for comparison
const validIds = new Set(items.map((item) => String(item.id)));
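For context, a minimal sketch of where this typically bites; the SceneId brand and the scene/clip shapes are illustrative:
type SceneId = string & { readonly __brand: 'SceneId' };

// ❌ Set<SceneId>.has(plainString) is a type error, and mixed runtime
// representations (e.g., number vs string ids) never match
// ✅ Normalize both sides to plain strings before comparing
const validIds = new Set(scenes.map((scene) => String(scene.id)));
const orphanedClips = clips.filter((clip) => !validIds.has(String(clip.sceneId)));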
Pitfall 4: Frontend Makes Request But Nothing Happens
Symptom: No network request visible in DevTools
Root Cause: Missing import, early return in hook, or state issue
Solution:
- Check component imports all required hooks
- Verify hook actually makes fetch() call
- Check for conditional logic preventing execution
- Add console.log to trace execution path
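A hedged illustration of the "early return" case; the hook, context, and endpoint names here are hypothetical:
// ❌ hooks/useGenerateFrames.ts - a guard silently blocks the request
export function useGenerateFrames() {
  const { isReady } = useProjectContext(); // hypothetical context
  const generate = async (prompt: string) => {
    if (!isReady) return; // stays false if the provider is missing, so fetch() never fires
    await fetch('/api/feature/create', {
      method: 'POST',
      body: JSON.stringify({ prompt }),
    });
  };
  return { generate };
}
// Fix: log the guard condition and confirm the provider actually wraps this component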
Pitfall 5: Database Rejects Insert with Enum Error
Symptom: "invalid input value for enum" error
Root Cause: TypeScript enum includes value not in database
Solution:
- Create migration to add enum value
- Push migration: supabase db push
- Verify: supabase migration list
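A minimal migration sketch; the enum value name is illustrative:
-- supabase/migrations/YYYYMMDD_add_job_type_value.sql
ALTER TYPE job_type ADD VALUE IF NOT EXISTS 'new_feature_job';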
Pitfall 6: Duplicate Routes Created
Symptom: Multiple routes doing the same thing, conflicting implementations
Root Cause: Didn't check for existing routes before creating new ones
Detection:
# Find duplicate route patterns
find app/api -name "route.ts" | grep -i "generate"
find app/api -name "route.ts" | grep -i "frame"
# Check for similar functionality
grep -r "fal.run" app/api/
grep -r "processing_jobs" app/api/
Solution:
- Search for existing routes with similar names/functionality
- Review existing implementation
- Decide: extend existing route OR deprecate and replace
- If replacing, create migration plan for frontend
- Document which routes are deprecated
- Update API documentation
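One way to spot overlap quickly is to print every route path in a single list; a sketch:
# List every API route path to spot near-duplicates at a glance
find app/api -name "route.ts" | sed 's|^app||; s|/route.ts$||' | sort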
Pitfall 7: AI Generation Bypasses Queue
Symptom: Vercel timeouts, slow responses, no job tracking
Root Cause: AI API called directly in route handler instead of using queue
Detection:
# Check for direct AI calls in routes
grep -r "fal.run\|replicate.run\|fetch.*api.openai" app/api/
# Check if jobs are being created
supabase db remote sql "
SELECT COUNT(*) as recent_jobs
FROM processing_jobs
WHERE created_at > NOW() - INTERVAL '1 hour'
"
# If the count is 0 but the feature is being used, the queue is being bypassed!
Solution:
- Move AI API call from route to worker
- Create job in route handler
- Implement worker to process job
- Update frontend to poll job status
- Add job type to database enum if needed
Full Migration:
-- STEP 1: Add job type to enum
-- supabase/migrations/YYYYMMDD_add_job_type.sql
ALTER TYPE job_type ADD VALUE IF NOT EXISTS 'your_ai_feature';
// STEP 2: Update route to create job
export async function POST(request: Request) {
const userId = await getUserId(request);
const body = await request.json();
const { data: job } = await supabase
.from('processing_jobs')
.insert({
user_id: userId,
job_type: 'your_ai_feature',
status: 'pending',
input_data: body,
})
.select()
.single();
return NextResponse.json({ jobId: job.id });
}
// STEP 3: Create worker
// lib/workers/yourAiFeatureWorker.ts
export async function processYourAiFeature(job: ProcessingJob) {
await updateJobStatus(job.id, 'processing');
const result = await aiService.generate(job.input_data);
await updateJobStatus(job.id, 'completed', {
output_data: result,
});
}
// STEP 4: Update frontend to poll
const checkJobStatus = async (jobId: string) => {
const { data: job } = await supabase
.from('processing_jobs')
.select('*')
.eq('id', jobId)
.single();
return job;
};
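A usage sketch for the polling helper above; the 2-second interval, the timeout, and the failed status/error field are assumptions to adapt to the repo's job schema:
const waitForJob = async (jobId: string, timeoutMs = 120_000) => {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    const job = await checkJobStatus(jobId);
    if (job.status === 'completed') return job.output_data;
    if (job.status === 'failed') throw new Error(job.error_message ?? 'Job failed');
    await new Promise((resolve) => setTimeout(resolve, 2000)); // poll every 2s
  }
  throw new Error(`Timed out waiting for job ${jobId}`);
};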
Pitfall 8: API Parameter Mismatches (VERY COMMON)
Symptom: API calls fail with 400/422 errors, or return unexpected results, or use deprecated endpoints
Root Cause: Implementation doesn't match latest API documentation
Common Mismatches:
- Wrong model IDs (e.g., flux/dev instead of fal-ai/flux/dev)
- Wrong parameter names (e.g., imageSize instead of image_size)
- Missing required parameters
- Wrong authentication header format
- Using deprecated endpoints
- Incorrect response parsing
Detection:
# Check Axiom for API errors
mcp__axiom__queryApl("['nonlinear-editor'] | where ['_time'] > ago(1h) | where ['message'] contains 'API' or ['message'] contains '400' or ['message'] contains '422'")
# Check for API calls in code
grep -r "fal.run\|replicate.run\|openai" lib/services/
# Check auth headers
grep -r "Authorization.*Key\|X-API-Key" lib/services/
Fix with Firecrawl Verification (Phase 5):
// STEP 1: Scrape latest API docs
const falDocs = await mcp__firecrawl__firecrawl_scrape({
url: 'https://fal.ai/models/fal-ai/flux/dev',
formats: ['markdown'],
onlyMainContent: true,
});
// STEP 2: Extract correct parameters from docs
// Look for: required parameters, optional parameters, response format
// STEP 3: Read your implementation
const serviceCode = await Read({
file_path: 'lib/services/falService.ts',
});
// STEP 4: Compare and fix mismatches
// ❌ BEFORE (wrong):
const result = await fal.run('flux/dev', {
prompt: text,
imageSize: '1024x1024', // Wrong parameter name
});
const url = result.image.url; // Wrong response format
// ✅ AFTER (correct, from docs):
const result = await fal.run('fal-ai/flux/dev', {
prompt: text,
image_size: '1024x1024', // Correct snake_case
num_inference_steps: 28, // Added recommended param
});
const url = result.images[0].url; // Correct array response
Prevention:
- ALWAYS run Phase 5 (Firecrawl Verification) before production testing
- Use Firecrawl to scrape latest docs for ALL APIs used
- Compare implementation with scraped documentation
- Fix ALL mismatches before deploying
- Test with real API calls to verify
- Check Axiom after deployment to confirm no API errors
Quick Fix:
# Launch Firecrawl verification agent
# Task: Verify all external API integrations match latest documentation
# - Scrape: FAL AI, Replicate, OpenAI, Supabase docs
# - Compare: endpoint URLs, parameters, auth headers, response formats
# - Report: mismatches and required fixes
Pitfall 9: Model Name Mismatches (VERY COMMON)
Symptom: API calls fail with "model not found" or use deprecated models
Root Cause: Code uses outdated or incorrect model names
Common Mismatches:
- Using dalle-3 instead of gpt-image-1
- Using ideogram-v2 instead of ideogram-v3
- Using generic names like veo instead of veo-3
- Using flux-dev instead of fal-ai/flux/dev
- Missing version numbers in model IDs
Detection:
# Search for model references
grep -ri "gpt-\|dall\|claude\|gemini\|imagen\|ideogram\|flux\|veo\|minimax" lib/ app/ | grep -v node_modules
# Check Axiom for model-related errors
mcp__axiom__queryApl("['nonlinear-editor'] | where ['_time'] > ago(1h) | where ['message'] contains 'model' and ['severity'] == 'error'")
Fix: Run model validation (Phase 5, Step 5.0.7) before deployment
Prevention:
- Always use Firecrawl to verify latest model names from official docs
- Create model validation matrix (Step 5.0.7)
- Never copy-paste model names from old examples or community posts
- Check API provider's official documentation for current model IDs
- Update model constants regularly as new versions release
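One way to make the validation matrix enforceable is a single constants module that every service imports; a sketch, where the IDs shown are examples to re-verify against current provider docs, not a source of truth:
// lib/constants/models.ts - single source of truth, re-verified via Firecrawl in Step 5.0.7
export const MODEL_IDS = {
  imageGeneration: 'fal-ai/flux/dev', // verify against fal.ai docs
  videoGeneration: 'veo-3', // verify against the provider's docs
} as const;

export type ModelId = (typeof MODEL_IDS)[keyof typeof MODEL_IDS];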
Pitfall 10: Next.js Client/Server Boundary Violations (CRITICAL)
Symptom: Build succeeds locally, but production API routes fail with:
Error: Attempted to call [functionName] from the server but [functionName] is on the client.
Root Cause: Server-only code marked with 'use client' directive, or client-only code used in server contexts
Common Scenarios:
1. 'use client' in files exporting server functions
// ❌ lib/logger.ts
'use client' // This breaks everything!
export function createServerLogger() { ... } // Now client-only!
2. Server utilities imported in client components
// ❌ components/MyComponent.tsx
'use client';
import { createServerLogger } from '@/lib/logger'; // Fails!
3. Browser APIs in API routes
// ❌ app/api/foo/route.ts
export async function POST() {
  const data = localStorage.getItem('key'); // No localStorage on server!
}
Why Builds Succeed Locally:
- Next.js build process doesn't catch all boundary violations
- TypeScript doesn't understand 'use client' semantics
- Error only manifests when server-side code actually executes
- Local development may use different bundling strategy
Detection:
# Check for 'use client' in server-only files
grep -rn "use client" lib/logger.ts lib/auth/ lib/services/ lib/database/
# Check for server-only imports in client components
grep -l "use client" app/ components/ | xargs grep -l "createServerLogger\|from 'crypto'"
# Check API routes for browser APIs
find app/api -name "route.ts" | xargs grep -E "(window\.|document\.|localStorage)"
# Run full validation (Step 4.5)
# See Phase 4, Step 4.5 for complete automated checks
Solution:
1. Remove incorrect 'use client' directives
// ✅ lib/logger.ts - NO 'use client'
export function createServerLogger() { ... } // Server-only now!
2. Split mixed files into client/server versions
// ✅ lib/logger.server.ts
export function createServerLogger() { ... }
// ✅ lib/logger.client.ts
'use client'
export function createClientLogger() { ... }
3. Use dynamic imports for client components
// ✅ app/page.tsx
import dynamic from 'next/dynamic';
const ClientComponent = dynamic(() => import('./ClientComponent'), {
  ssr: false,
});
4. Move browser APIs to client components
// ✅ app/api/foo/route.ts - Use server APIs only
export async function POST() {
  const data = await getFromDatabase(); // Server-only!
}
Full Fix Procedure:
# 1. Run automated boundary checks (Phase 4, Step 4.5)
# See Step 4.5 for complete script
# 2. Review and fix all violations
# Remove incorrect 'use client' directives
# Split mixed files if needed
# Update imports
# 3. Verify fix
rm -rf .next
npm run build
# 4. Test in preview deployment
git add -A && git commit -m "fix: Remove incorrect 'use client' directives"
git push
# Wait for Vercel deployment
# Test API routes in production
# 5. Monitor Axiom for errors
mcp__axiom__queryApl("['nonlinear-editor'] | where ['_time'] > ago(10m) | where ['message'] contains 'client' and ['severity'] == 'error'")
Prevention:
- ALWAYS run Step 4.5 before production deployment
- Use automated checks in CI/CD pipeline
- Code review checklist: Check for 'use client' in server files
- Establish naming conventions:
  - *.server.ts - Server-only code (no 'use client')
  - *.client.ts - Client-only code (always 'use client')
  - *.ts - Shared utilities (careful with directives)
- Document server-only functions with JSDoc:
/**
 * Server-only function. DO NOT import in client components.
 * @server-only
 */
export function createServerLogger() { ... }
Real-World Example (October 2025):
During a refactor, lib/logger.ts accidentally got a 'use client' directive at line 38. This made createServerLogger() client-only, and all API routes importing it failed in production with the boundary error. The build succeeded with no warnings, so the issue was caught only after production deployment.
Impact: All API routes broken, complete production failure.
Fix: Removed 'use client' directive, cleared .next cache, rebuilt, redeployed.
Lesson: Step 4.5 now prevents this class of errors before deployment.
Agent Swarm Patterns
Pattern 1: Parallel Diagnosis (5 agents)
Use when: Something is broken and you don't know where
Launch 5 agents simultaneously to check:
- Frontend code
- API route
- Database state
- Type definitions
- Production logs
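A sketch of what launching the five agents looks like, written in the same Task pseudocode used earlier in this document (prompts abbreviated):
await Promise.all([
  Task({ subagent_type: 'general-purpose', description: 'Check frontend code', prompt: '...' }),
  Task({ subagent_type: 'general-purpose', description: 'Check API route', prompt: '...' }),
  Task({ subagent_type: 'general-purpose', description: 'Check database state', prompt: '...' }),
  Task({ subagent_type: 'general-purpose', description: 'Check type definitions', prompt: '...' }),
  Task({ subagent_type: 'general-purpose', description: 'Check production logs in Axiom', prompt: '...' }),
]);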
Pattern 2: Sequential Implementation (3-4 agents)
Use when: Building complex feature step-by-step
Agent 1: Implement data layer (types, migrations)
↓ wait for completion
Agent 2: Implement API layer (routes, middleware)
↓ wait for completion
Agent 3: Implement UI layer (components, hooks)
↓ wait for completion
Agent 4: Validate entire feature
Pattern 3: Breadth-First Feature Coverage (4-6 agents)
Use when: Implementing multiple independent features
Launch agents in parallel, one per feature:
Agent 1: Chat interface
Agent 2: Model selection
Agent 3: Image upload
Agent 4: Settings panel
Agent 5: Export functionality
Agent 6: Validation of all features
Success Criteria
A refactoring is complete when:
Code Quality:
- ✅ All features from reference implementation work
- ✅ Build succeeds with no errors
- ✅ All tests pass
- ✅ No duplicate routes created
- ✅ No ESLint warnings
Environment Variables:
- ✅ All environment variables verified against current repo's .env.example
- ✅ No variable name mismatches (e.g., FAL_KEY vs FAL_API_KEY)
- ✅ All required variables present in .env.local
- ✅ User notified of any missing variables before refactoring started
- ✅ Code updated to use correct variable names from current repo
- ✅ New variables added to .env.example with documentation
- ✅ Environment validated with npm run validate:env
Queue & Architecture:
- ✅ All AI generation uses Supabase queues (no direct API calls in routes)
- ✅ Job types added to database enum
- ✅ Workers implemented for background processing
- ✅ Frontend polls job status correctly
API Verification (via Firecrawl):
- ✅ All model IDs match latest documentation
- ✅ All required parameters present in API calls
- ✅ Parameter names and types correct
- ✅ Authentication headers use correct format
- ✅ Endpoint URLs are current (not deprecated)
- ✅ Response parsing handles all documented formats
- ✅ No hardcoded credentials or staging URLs
- ✅ Firecrawl verification agent completed with 0 mismatches
Feature Parity (100% required):
- ✅ Validation agent verified ALL features from reference work
- ✅ Feature parity report shows 100% score (X/X features)
- ✅ Side-by-side comparison confirms identical behavior
- ✅ All user flows tested and working
- ✅ All edge cases handled
- ✅ No missing functionality
UI/UX Match:
- ✅ Layout matches reference OR documented improvements approved
- ✅ Component placement identical to reference
- ✅ Colors, fonts, spacing match reference
- ✅ Responsive behavior matches at all breakpoints
- ✅ Animations and transitions work identically
- ✅ Loading states match reference
- ✅ Error states match reference
- ✅ Success states match reference
- ✅ Modal/dialog behavior identical
- ✅ Keyboard navigation works same way
Runtime Verification:
- ✅ No console errors in browser (Chrome DevTools verified)
- ✅ All API endpoints return 2xx responses
- ✅ Database records created correctly
- ✅ Rate limiting works (no fallback to memory - verified in Axiom)
- ✅ Authentication works
Production Validation (via Chrome DevTools MCP):
- ✅ Axiom shows 0 errors in last 30 minutes
- ✅ No rate limit fallback messages in logs
- ✅ Jobs created and processed successfully
- ✅ Loading states show properly
- ✅ Error states handled gracefully
- ✅ Success states trigger correctly
Comprehensive Validation Passed:
- ✅ Validation agent completed full assessment
- ✅ Feature parity report delivered (100% score required)
- ✅ UI/UX match report delivered
- ✅ All test results documented
- ✅ Performance benchmarks meet or exceed reference
- ✅ Accessibility standards verified
- ✅ Security review passed
Documentation:
- ✅ ISSUES.md updated
- ✅ API documentation updated
- ✅ Feature parity report saved to docs/reports/
- ✅ UI/UX validation screenshots captured
- ✅ Changes committed and pushed
Recursive Validation Passed:
- ✅ At least 3 iterations of production testing completed
- ✅ All Axiom errors fixed recursively
- ✅ Final validation shows clean logs
Example: Parallel Frame Generator Refactoring
This skill was created based on refactoring the Parallel Frame Generator feature. Here's what we learned:
Issues Found:
- Type mismatch in types/supabase.ts - Function signatures didn't match migrations
- Branded type comparison failures - Set comparisons always failed
- Database enum missing values - TypeScript enum had values DB didn't
- Missing imports - Components used hooks without importing them
Solution Pattern:
- Launched 5 diagnostic agents in parallel
- Found issues in types, database, and components
- Fixed types first (most critical - affects all routes)
- Applied database migration
- Fixed component imports
- Cleared cache and rebuilt
- Validated end-to-end
Key Insight:
Always check types/supabase.ts for consistency with migrations! This file is auto-generated but can get out of sync, causing cascading failures across the entire application.
Quick Reference Commands
# Axiom Verification (ALWAYS CHECK FIRST!)
mcp__axiom__listDatasets() # List available datasets
mcp__axiom__getDatasetSchema({ datasetName: 'nonlinear-editor' }) # Get schema
mcp__axiom__queryApl({ query: "['nonlinear-editor'] | where ['_time'] > ago(1h) | count" }) # Test query
mcp__axiom__queryApl({ query: "['nonlinear-editor'] | where ['_time'] > ago(10m) | where ['severity'] == 'error' | limit 20" }) # Check recent errors
# Pre-Implementation Checks
find app/api -name "route.ts" # Find all routes
grep -r "export async function POST" app/api/ # Find POST handlers
grep -r "fal.run\|replicate.run" app/api/ # Check for direct AI calls
supabase db remote sql "SELECT enumlabel FROM pg_enum JOIN pg_type ON pg_enum.enumtypid = pg_type.oid WHERE pg_type.typname = 'job_type'" # Check job types
# Rate Limit Verification
grep -A 5 "CREATE OR REPLACE FUNCTION increment_rate_limit" supabase/migrations/*.sql # Check migration
grep -A 10 "increment_rate_limit" types/supabase.ts # Check types
grep -A 5 "supabase.rpc('increment_rate_limit'" lib/rateLimit.ts # Check RPC calls
# Queue Verification
supabase db remote sql "SELECT COUNT(*) FROM processing_jobs WHERE created_at > NOW() - INTERVAL '1 hour'" # Check recent jobs
supabase db remote sql "SELECT DISTINCT job_type, COUNT(*) FROM processing_jobs GROUP BY job_type" # List job types in use
# API Verification with Firecrawl
mcp__firecrawl__firecrawl_scrape({ url: 'https://fal.ai/models/fal-ai/flux/dev', formats: ['markdown'] }) # Scrape FAL docs
mcp__firecrawl__firecrawl_search({ query: 'fal ai flux dev api parameters', limit: 5 }) # Search for API params
grep -r "fal.run\|replicate.run" lib/services/ # Find API calls in services
grep -r "Authorization.*Key\|X-API-Key" lib/services/ # Check auth headers
# Diagnosis
mcp__axiom__queryApl() # Check production logs
supabase migration list # Check DB state
npm run build # Check for type errors
gh run list # Check CI/CD
# Next.js Client/Server Boundary Checks (CRITICAL - Phase 4, Step 4.5)
grep -rn "use client" lib/logger.ts lib/auth/ lib/services/ lib/database/ # Check server files
grep -l "use client" app/ components/ | xargs grep -l "createServerLogger" # Check client imports
find app/api -name "route.ts" | xargs grep -E "(window\.|localStorage)" # Check API routes
# Fix
supabase db push # Apply migrations
rm -rf .next # Clear cache
npm run lint -- --fix # Fix linting
# Production Testing
mcp__chrome_devtools__navigate_page() # Navigate to production
mcp__chrome_devtools__list_console_messages() # Check console errors
mcp__chrome_devtools__list_network_requests() # Check network requests
mcp__axiom__queryApl("['nonlinear-editor'] | where ['_time'] > ago(10m) | where ['severity'] == 'error'") # Check Axiom errors
# Validate
npm run build # Full build
npm run test # Run tests
git diff # Review changes
Integration with Other Skills
This skill works well with:
- debug-nonlinear-editor - Use when bugs are found during refactoring
- code-validator - Use after implementation to validate changes
- maintain-coding-standards - Use during implementation to follow patterns
- code-maintenance - Use for cleanup after refactoring
Remember: When user says "create swarms" or "use parallel agents" for feature work, this is the skill to use!