| name | edu-demo-builder |
| description | Build educational demos with excellent UX. Use when spawned by orchestrator to create or improve interactive visualizations. Focus on: obvious next action, no scrolling, persistent state display. You don't see benchmarks - follow UX principles. Copy base file to your output, then edit your copy. |
Educational Demo Builder
Build demos. You don't see the benchmark - focus on UX principles.
DESIGN THE VISUAL CONCEPT FIRST
Before writing ANY code, describe the visual concept that makes the algorithm structure OBVIOUS.
- Sketch the visual layout: Where do elements appear? How does the algorithm flow visually?
- Design for clarity: What colors, spacing, or animations make the structure visible?
- Test comprehension: If someone saw a static screenshot, could they understand the concept?
- Plan element traceability: Can the learner follow where each element goes through the algorithm?
Only AFTER designing the visual concept, proceed to implementation.
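For a sense of the level of detail to aim for, here is a minimal sketch of a visual-concept note, written as an HTML comment so it could sit at the top of a demo file. The binary-search subject and every detail in it are purely illustrative; the formal output of this phase is visual_concept.md, per Phase 2 of the workflow below.

```html
<!-- Illustrative visual-concept sketch (assumed example: binary search)
VISUAL CONCEPT: binary search over a sorted row of boxes
- Layout: one horizontal row of numbered boxes centered in the viewport;
  a status bar pinned to the top, a single action button pinned to the bottom.
- Clarity: lo/hi boxes outlined in blue, the mid box filled amber, eliminated
  boxes faded to 30% opacity so the shrinking range is obvious.
- Static-screenshot test: the faded regions plus a "Step k of n" label make
  the current state readable without any interaction.
- Traceability: each box keeps its index label, so a learner can follow any
  element from the first frame to the last.
-->
```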
Your Assignment
Orchestrator specifies your direction:
- Generation number - which iteration
- Vibe to explore - the creative direction (narrative, minimalist, comparison, etc.)
- Operations to apply - the specific improvements to focus on
You execute within that boundary. Don't invent operations or change direction; apply what's assigned.
Orchestrator controls strategy. You control execution.
Default: Copy Base First
Always start by copying the base file:
cp problems/base.html problems/<name>/generations/gen{N}/agent_{id}.html
This gives you a working foundation with built-in screenshot capture system.
The base includes:
- HTML structure and styling
- Screenshot capture via html2canvas
- Evaluator controls (manual capture, download)
- API:
window.screenshotManager.captureState(label)
Then study references, decide your strategy (patch vs fresh), and apply operations.
You can opt to start from scratch if the base won't serve your vision, but document why you discarded it.
Built-in Screenshot System
The base.html includes automatic screenshot capture:
For your algorithm code:
// Capture state after important step
await window.screenshotManager.captureState('step_name');
// Or auto-capture with timestamp
window.screenshotManager.captureStep('algorithm_event');
For evaluators:
- Click "📸 Capture State" button to manually capture
- Click "⬇️ Download Screenshots" to download all captured PNGs
- Screenshots save automatically in the browser; the evaluator can download them
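As a minimal sketch of wiring this API into a demo, the fragment below captures a labeled screenshot each time the learner advances a step. The button id, the step counter, and renderStep() are illustrative assumptions; window.screenshotManager and captureState(label) are the base.html API documented above, and the optional chaining keeps the snippet harmless if the manager isn't loaded.

```html
<button id="next-step">Next step</button>
<script>
  let step = 0;

  function renderStep(n) {
    // Illustrative placeholder: draw the algorithm state for step n here.
  }

  document.getElementById('next-step').addEventListener('click', async () => {
    step += 1;
    renderStep(step);
    // Capture a labeled screenshot once the new state is on screen.
    // Optional chaining: does nothing outside base.html.
    await window.screenshotManager?.captureState(`step_${step}`);
  });
</script>
```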
Worker Capabilities
You have three core capabilities to leverage:
1. Visual Thinking
- Sketch layouts (textual descriptions of visual structure)
- Plan color schemes and visual hierarchy
- Design interaction flow
- Before ANY code: describe what the learner SEES
2. Verification
- Test interactivity in browser (Chrome E2E)
- Validate test cases pass
- Check for bugs and edge cases
- Verify learning outcome is achieved
- Screenshot key states to prove correctness
3. Coding
- HTML/CSS/JavaScript implementation
- Clean, readable code
- Performance optimization
- Browser compatibility
- Accessibility features
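To ground capability 3, here is a minimal sketch of an accessible, persistent state readout: the aria-live region lets assistive technology announce each algorithm step as it happens. The id, helper name, and message text are illustrative assumptions.

```html
<div id="algo-state" role="status" aria-live="polite">
  Step 0 of 8: ready to start
</div>
<script>
  // Illustrative helper: call this whenever the algorithm advances so the
  // visible text and the screen-reader announcement stay in sync.
  function announceState(message) {
    document.getElementById('algo-state').textContent = message;
  }
</script>
```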
Workflow: Copy → Study → Discover → Build → Verify
PHASE 0: COPY BASE FIRST
├─ cp base.html generations/gen{N}/agent_X.html
├─ You now have a working starting point
└─ (Or delete if you choose fresh start; document why)
PHASE 1: STUDY & DISCOVER (GEN 1)
├─ Read base.html (your foundation)
├─ Read problem.md (concept to teach)
├─ Study assigned Vibe (narrative, minimalist, comparison, etc.)
├─ Copy base.html
├─ Adapt for your vibe
└─ Output: approach documented
PHASE 1: STUDY & DISCOVER (GEN 2+)
├─ Read LESSONS_LEARNED.md (what worked overall)
├─ Orchestrator specifies: "STUDY these predecessors:"
│  ├─ /gen{N-1}/agent_4.html (comparison vibe - scored 92)
│  ├─ /screenshots/agent_4_*.png (visualize it)
│  └─ Similar for other predecessors orchestrator points to
├─ Only study what orchestrator specified (not the whole folder)
├─ Decide: Patch a winner, or start fresh?
└─ Output: approach decision documented
**Study your assigned references:**
- What visual patterns made learning happen in these specific vibes?
- How did these agents present the algorithm?
- How can you improve or blend them?
PHASE 2: VISUAL THINKING (if patching or fresh)
├─ Describe visual concept (textual, not code)
├─ Plan layout and element placement
├─ Design color scheme and hierarchy
├─ Map interaction flows
└─ Output: visual_concept.md (document your design)
PHASE 3: CODING & BUILDING
├─ If patching: Edit() the copied base.html iteratively
├─ If fresh: Write() new agent_X.html from scratch
├─ Test locally in browser
├─ Fix issues as found
└─ Output: agent_X.html
PHASE 4: VERIFICATION
├─ Navigate to local server (http://localhost:9999/...)
├─ Execute key interactions (step through algorithm)
├─ Screenshot initial state, mid-state, final state (see the sketch after this workflow)
├─ Verify test cases if provided
└─ Output: screenshots/, agent_X_approach.md
PHASE 5: DOCUMENT YOUR DISCOVERY
├─ Create: agent_X_approach.md
├─ Explain: Did you patch or start fresh? Why?
├─ Show: How you applied operations
├─ Detail: What you preserved or changed
└─ This helps orchestrator understand your reasoning
Multiple agents share the same base. NEVER edit the original base.html itself.
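As a concrete illustration of the Phase 4 screenshot step, the sketch below drives the demo's own step control and captures the three key states. The #next-step id, the fixed settle delay, and the totalSteps argument are illustrative assumptions; captureState comes from the base.html screenshot API described earlier.

```html
<script>
  // Illustrative verification helper: click through every step and capture
  // initial, middle, and final states with the base's screenshot manager.
  async function captureKeyStates(totalSteps) {
    const next = document.getElementById('next-step');
    await window.screenshotManager?.captureState('initial');
    for (let i = 1; i <= totalSteps; i++) {
      next.click();                                 // advance the algorithm
      await new Promise(r => setTimeout(r, 300));   // let the DOM settle
      if (i === Math.floor(totalSteps / 2)) {
        await window.screenshotManager?.captureState('mid');
      }
    }
    await window.screenshotManager?.captureState('final');
  }
  // Example: captureKeyStates(8);
</script>
```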
Theme is INPUT (Separate from Development)
Theme comes from orchestrator prompt, not your choice.
If theme specified:
## Theme: dark
Use: dark background, light text, cyan/green accents
If no theme specified: use clean neutral (white bg, dark text).
Don't invent themes. Focus on functionality and UX, not colors.
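A minimal sketch of applying a specified dark theme through CSS custom properties; the variable names and exact colors are illustrative assumptions, not part of base.html. Keeping colors in variables also makes it easy to fall back to the neutral default when no theme is given.

```html
<style>
  :root {
    --bg: #111418;      /* dark background */
    --fg: #e6e6e6;      /* light text */
    --accent: #2dd4bf;  /* cyan/green accent */
  }
  body { background: var(--bg); color: var(--fg); }
  button {
    background: transparent;
    color: var(--accent);
    border: 1px solid var(--accent);
  }
</style>
```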
Core Rules
Educational Value First:
Does this demo teach the concept effectively? Would a student understand WHY the algorithm works? Could they explain it to someone else?
UX Excellence:
The user should NEVER have to guess what to do next. Learning happens at HUMAN speed. If you printed a screenshot, could you still learn from it?
UX Requirements
1. Obvious Next Action (see the combined sketch after this list)
- Single clear button (or step indicator showing which)
- No competing buttons
- Label changes based on state
2. No Scrolling
- Fit in viewport (100vh)
- Floating panels, not fixed sidebars
- Controls dock to edges
3. Show Algorithm State
- WHERE in the algorithm
- WHAT just happened
- WHAT to do next
- Progress always visible
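The sketch below combines the three requirements: one button whose label tracks state, a 100vh grid so nothing scrolls, and a persistent status line showing where the algorithm is, what just happened, and what comes next. Every id, label, and number is an illustrative assumption.

```html
<style>
  body { margin: 0; }
  .demo { display: grid; grid-template-rows: auto 1fr auto; height: 100vh; } /* fits the viewport, no scrolling */
  #status { padding: 8px 12px; background: #f3f3f3; font-family: sans-serif; } /* always-visible algorithm state */
  #stage { overflow: hidden; }                                                 /* visualization area */
  #controls { padding: 8px 12px; text-align: center; }
</style>
<div class="demo">
  <div id="status">Step 0 of 8 | Just happened: nothing yet | Next: click "Start"</div>
  <div id="stage"><!-- the algorithm visualization renders here --></div>
  <div id="controls">
    <button id="action">Start</button> <!-- the single, obvious next action -->
  </div>
</div>
<script>
  const total = 8;
  let step = 0;
  const actionBtn = document.getElementById('action');
  const statusEl = document.getElementById('status');
  actionBtn.addEventListener('click', () => {
    step = (step === total) ? 0 : step + 1;   // advance, or reset after the last step
    const done = step === total;
    const didWhat = step === 0 ? 'reset' : 'advanced one step';
    const nextLabel = done ? 'Restart' : 'Next step';
    statusEl.textContent = `Step ${step} of ${total} | Just happened: ${didWhat} | Next: click "${nextLabel}"`;
    actionBtn.textContent = nextLabel;  // label changes with state, so the next action stays obvious
  });
</script>
```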
Persistent > Ephemeral
Good: Labels that stay, step indicator visible, progress shown
Bad: Tooltips that disappear, auto-advance, hover-only info
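A minimal sketch of the contrast; the class names and the "pivot" label are illustrative. The first node keeps its label on screen, so even a printed screenshot still teaches; the second hides the same fact in a hover-only tooltip.

```html
<style>
  .node { display: inline-block; padding: 6px 10px; margin: 4px; border: 1px solid #888; }
  .node .label { display: block; font-size: 11px; color: #555; } /* persistent, always visible */
</style>
<!-- Good: the label is part of the layout and survives a screenshot. -->
<div class="node">42 <span class="label">pivot</span></div>
<!-- Bad (avoid): the same information only appears on hover and vanishes. -->
<div class="node" title="pivot">42</div>
```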
Building ON Winners
When orchestrator assigns you to improve a predecessor:
STEP 1: Copy the base
cp base.html generations/gen{N}/agent_X.html
STEP 2: Study what worked
Read(generations/gen{N-1}/winning_agent.html)
Read(LESSONS_LEARNED.md)
# Understand why this predecessor resonated
STEP 3: Discover your approach
- Can you patch this winner with improvements?
- Or does it need a fresh approach?
- How do operations [fix_bugs, refine_details] apply?
STEP 4: Build
- If patching: Edit iteratively, preserve winning parts
- If fresh: Write new version, document why
STEP 5: Document
Create agent_X_approach.md:
- Did you patch or start fresh?
- Why did you choose that strategy?
- How did you apply operations?
- What did you keep vs change?
Document Operations Applied
After implementation, create:
# agent_X_approach.md
## Approach
Improve gen{N-1}/agent_X (narrative - scored 87/100)
## Operations Applied
1. fix_bugs - Fixed text overflow in legend
2. refine_details - Improved spacing between elements
3. add_sophistication - Added color-coded node highlighting
## Key Changes
- Increased padding from 8px to 12px for readability
- Added hover state showing node depth
- Improved legend visibility with semi-transparent background
## Verification
- All test cases pass
- No console errors
- Smooth interactions at 60fps
This helps orchestrator understand what you did and why.
Output
Write to: generations/gen{N}/agent_{id}.html
Also create:
- generations/gen{N}/agent_{id}_approach.md - Document your design and operations
- generations/gen{N}/screenshots/agent_{id}_*.png - Key states