| name | create-meta-prompts |
| description | Create optimized prompts for Claude-to-Claude pipelines with research, planning, and execution stages. Use when building prompts that produce outputs for other prompts to consume, or when running multi-stage workflows (research -> plan -> implement). |
Each prompt gets its own folder in .prompts/ with its output artifacts, enabling clear provenance and chain detection.
IF no context provided (skill invoked without description): → IMMEDIATELY use AskUserQuestion with:
- header: "Purpose"
- question: "What is the purpose of this prompt?"
- options:
- "Do" - Execute a task, produce an artifact
- "Plan" - Create an approach, roadmap, or strategy
- "Research" - Gather information or understand something
After selection, ask: "Describe what you want to accomplish" (they select "Other" to provide free text).
IF context was provided: → Check if purpose is inferable from keywords:
- implement, build, create, fix, add, refactor → Do
- plan, roadmap, approach, strategy, decide, phases → Plan
- research, understand, learn, gather, analyze, explore → Research
→ If unclear, ask the Purpose question above as the first contextual question
→ If clear, proceed to adaptive_analysis with the inferred purpose
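As an illustrative sketch only (not the skill's literal mechanism), the keyword routing could look like this in shell, assuming the user's request text is in `$request`:

```bash
# Hedged sketch: route a request to a purpose by keyword; $request is assumed input
case "$(printf '%s' "$request" | tr '[:upper:]' '[:lower:]')" in
  *implement*|*build*|*create*|*fix*|*add*|*refactor*)          purpose="Do" ;;
  *plan*|*roadmap*|*approach*|*strategy*|*decide*|*phases*)     purpose="Plan" ;;
  *research*|*understand*|*learn*|*gather*|*analyze*|*explore*) purpose="Research" ;;
  *)                                                            purpose="" ;;  # unclear -> ask the Purpose question
esac
```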
- Purpose: Do, Plan, or Research
- Topic identifier: Kebab-case identifier for file naming (e.g., `auth`, `stripe-payments`)
- Complexity: Simple vs. complex (affects prompt depth)
- Prompt structure: Single vs multiple prompts
If topic identifier not obvious, ask:
- header: "Topic"
- question: "What topic/feature is this for? (used for file naming)"
- Let user provide via "Other" option
- Enforce kebab-case (convert spaces/underscores to hyphens)
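For example, a minimal normalization along these lines (illustrative shell; the example value is hypothetical):

```bash
# Lowercase, then convert spaces and underscores to hyphens
topic="Stripe Payments"
topic=$(printf '%s' "$topic" | tr '[:upper:]' '[:lower:]' | tr ' _' '--')
echo "$topic"   # -> stripe-payments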
If found:
- List them: "Found existing files: auth-research.md (in 001-auth-research/), stripe-plan.md (in 005-stripe-plan/)"
- Use AskUserQuestion:
- header: "Reference"
- question: "Should this prompt reference any existing research or plans?"
- options: List found files + "None"
- multiSelect: true
Match by topic keyword when possible (e.g., "auth plan" → suggest auth-research.md).
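A simple way to surface candidate files, sketched here with hypothetical paths:

```bash
# List prior research/plan outputs; optionally narrow by a topic keyword such as "auth"
ls .prompts/*/*-research.md .prompts/*/*-plan.md 2>/dev/null | grep -i 'auth'
```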
Load questions from: references/question-bank.md
Route by purpose:
- Do → artifact type, scope, approach
- Plan → plan purpose, format, constraints
- Research → depth, sources, output format
- header: "Ready"
- question: "Ready to create the prompt?"
- options:
- "Proceed" - Create the prompt with current context
- "Ask more questions" - I have more details to clarify
- "Let me add context" - I want to provide additional information
Loop until "Proceed" selected.
"Creating a {purpose} prompt for: {topic} Folder: .prompts/{number}-{topic}-{purpose}/ References: {list any chained files}"
Then proceed to generation.
Load purpose-specific patterns:
- Do: references/do-patterns.md
- Plan: references/plan-patterns.md
- Research: references/research-patterns.md
Load intelligence rules: references/intelligence-rules.md
- Objective: What to accomplish, why it matters
- Context: Referenced files (@), dynamic context (!)
- Requirements: Specific instructions for the task
- Output specification: Where to save, what structure
- Metadata requirements: For research/plan outputs, specify XML metadata structure
- Success criteria: How to know it worked
For Research and Plan prompts, output must include:
- `<confidence>` - How confident in the findings
- `<dependencies>` - What's needed to proceed
- `<open_questions>` - What remains uncertain
- `<assumptions>` - What was assumed
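For instance, the tail of a research output might carry a metadata block shaped like this (placeholder values, not prescribed content):

```xml
<confidence>Medium - based on official docs plus a small survey of the existing code</confidence>
<dependencies>Decision on session storage backend</dependencies>
<open_questions>Does the existing middleware support refresh tokens?</open_questions>
<assumptions>Single-tenant deployment; email/password auth only</assumptions>
```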
After saving prompt(s), present inline (not AskUserQuestion):
What's next?
1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other
Choose (1-4): _
</single_prompt_presentation>
<multi_prompt_presentation>
Prompts created:
- .prompts/001-auth-research/001-auth-research.md
- .prompts/002-auth-plan/002-auth-plan.md
- .prompts/003-auth-implement/003-auth-implement.md
Detected execution order: Sequential (002 references 001 output, 003 references 002 output)
What's next?
1. Run all prompts (sequential)
2. Review/edit prompts first
3. Save for later
4. Other
Choose (1-4): _
</multi_prompt_presentation>
</step_2_present>
<step_3_execute>
<title>Execution Engine</title>
<execution_modes>
<single_prompt>
Straightforward execution of one prompt.
1. Read prompt file contents
2. Spawn Task agent with subagent_type="general-purpose"
3. Include in task prompt:
- The complete prompt contents
- Output location: `.prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md`
4. Wait for completion
5. Validate output (see validation section)
6. Archive prompt to `completed/` subfolder
7. Report results with next-step options
</single_prompt>
<sequential_execution>
For chained prompts where each depends on previous output.
1. Build execution queue from dependency order
2. For each prompt in queue:
a. Read prompt file
b. Spawn Task agent
c. Wait for completion
d. Validate output
e. If validation fails → stop, report failure, offer recovery options
f. If success → archive prompt, continue to next
3. Report consolidated results
<progress_reporting>
Show progress during execution:
Executing 1/3: 001-auth-research... ✓
Executing 2/3: 002-auth-plan... ✓
Executing 3/3: 003-auth-implement... (running)
</progress_reporting>
</sequential_execution>
<parallel_execution>
For independent prompts with no dependencies.
1. Read all prompt files
2. **CRITICAL**: Spawn ALL Task agents in a SINGLE message
- This is required for true parallel execution
- Each task includes its output location
3. Wait for all to complete
4. Validate all outputs
5. Archive all prompts
6. Report consolidated results (successes and failures)
<failure_handling>
Unlike sequential, parallel continues even if some fail:
- Collect all results
- Archive successful prompts
- Report failures with details
- Offer to retry failed prompts
</failure_handling>
</parallel_execution>
<mixed_dependencies>
For complex DAGs (e.g., two parallel research → one plan).
1. Analyze dependency graph from @ references
2. Group into execution layers:
- Layer 1: No dependencies (run parallel)
- Layer 2: Depends only on layer 1 (run after layer 1 completes)
- Layer 3: Depends on layer 2, etc.
3. Execute each layer:
- Parallel within layer
- Sequential between layers
4. Stop if any dependency fails (downstream prompts can't run)
<example>
Layer 1 (parallel): 001-api-research, 002-db-research
Layer 2 (after layer 1): 003-architecture-plan
Layer 3 (after layer 2): 004-implement
</example>
</mixed_dependencies>
</execution_modes>
<dependency_detection>
<automatic_detection>
Scan prompt contents for @ references to determine dependencies:
1. Parse each prompt for `@.prompts/{number}-{topic}/` patterns
2. Build dependency graph
3. Detect cycles (error if found)
4. Determine execution order
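A minimal sketch of the scanning step (hypothetical paths; cycle detection and ordering not shown):

```bash
# Report which prior prompts each active prompt references via @.prompts/ paths
for f in .prompts/*/[0-9]*.md; do
  refs=$(grep -o '@\.prompts/[0-9]\{3\}-[a-z-]*' "$f" | sort -u)
  [ -n "$refs" ] && printf '%s depends on:\n%s\n\n' "$f" "$refs"
done
```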
<inference_rules>
If no explicit @ references found, infer from purpose:
- Research prompts: No dependencies (can parallel)
- Plan prompts: Depend on same-topic research
- Do prompts: Depend on same-topic plan
Override with explicit references when present.
</inference_rules>
</automatic_detection>
<missing_dependencies>
If a prompt references output that doesn't exist:
1. Check if it's another prompt in this session (will be created)
2. Check if it exists in `.prompts/*/` (already completed)
3. If truly missing:
- Warn user: "002-auth-plan references auth-research.md which doesn't exist"
- Offer: Create the missing research prompt first? / Continue anyway? / Cancel?
</missing_dependencies>
</dependency_detection>
<validation>
<output_validation>
After each prompt completes, verify success:
1. **File exists**: Check output file was created
2. **Not empty**: File has content (> 100 chars)
3. **Metadata present** (for research/plan): Check for required XML tags
- `<confidence>`
- `<dependencies>`
- `<open_questions>`
- `<assumptions>`
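A hedged validation sketch, assuming a hypothetical output path; it reports the first failed check:

```bash
out=".prompts/001-auth-research/auth-research.md"
[ -f "$out" ]                                  || { echo "missing output file"; exit 1; }
[ "$(wc -c < "$out" | tr -d ' ')" -gt 100 ]    || { echo "output too short"; exit 1; }
for tag in confidence dependencies open_questions assumptions; do
  grep -q "<$tag>" "$out" || { echo "missing <$tag> tag"; exit 1; }
done
echo "validation passed"
```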
<validation_failure>
If validation fails:
- Report what's missing
- Offer options:
- Retry the prompt
- Continue anyway (for non-critical issues)
- Stop and investigate
</validation_failure>
</output_validation>
</validation>
<failure_handling>
<sequential_failure>
Stop the chain immediately:
✗ Failed at 2/3: 002-auth-plan
Completed:
- 001-auth-research ✓ (archived)
Failed:
- 002-auth-plan: Output file not created
Not started:
- 003-auth-implement
What's next?
- Retry 002-auth-plan
- View error details
- Stop here (keep completed work)
- Other
</sequential_failure>
<parallel_failure>
Continue others, report all results:
Parallel execution completed with errors:
✓ 001-api-research (archived)
✗ 002-db-research: Validation failed - missing required metadata
What's next?
- Retry failed prompt (002)
- View error details
- Continue without 002
- Other
</parallel_failure>
</failure_handling>
<archiving>
<archive_timing>
- **Sequential**: Archive each prompt immediately after successful completion
- Provides clear state if execution stops mid-chain
- **Parallel**: Archive all at end after collecting results
- Keeps prompts available for potential retry
<archive_operation>
Move prompt file to completed subfolder:
```bash
mv .prompts/{number}-{topic}-{purpose}/{number}-{topic}-{purpose}.md \
.prompts/{number}-{topic}-{purpose}/completed/
```
Output file stays in place (not moved).
</archive_operation>
</archive_timing>
</archiving>
<result_presentation>
<single_result>
✓ Prompt completed: {number}-{topic}-{purpose}
Output: .prompts/{number}-{topic}-{purpose}/{topic}-{purpose}.md
Summary: [Brief description of what was produced]
What's next?
- View the output
- Create follow-up prompt (plan based on this research)
- Done
- Other
</single_result>
<chain_result>
✓ Chain completed: auth workflow
Results:
- 001-auth-research → .prompts/001-auth-research/auth-research.md [One-line summary]
- 002-auth-plan → .prompts/002-auth-plan/auth-plan.md [One-line summary]
- 003-auth-implement → Implementation complete [One-line summary of changes made]
All prompts archived to respective completed/ folders.
What's next?
- Review implementation
- Run tests
- Create new prompt chain
- Other
</chain_result>
</result_presentation>
<special_cases>
<re_running_completed>
If user wants to re-run an already-completed prompt:
1. Check if prompt is in `completed/` subfolder
2. Move it back to parent folder
3. Optionally backup existing output: `{output}.bak`
4. Execute normally
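For example, preparing a re-run might look like this (hypothetical prompt and output names):

```bash
dir=.prompts/002-auth-plan
mv "$dir/completed/002-auth-plan.md" "$dir/"                       # move prompt back for re-run
[ -f "$dir/auth-plan.md" ] && cp "$dir/auth-plan.md" "$dir/auth-plan.md.bak"   # optional backup
```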
</re_running_completed>
<output_conflicts>
If output file already exists:
1. For re-runs: Backup existing → `{filename}.bak`
2. For new runs: Should not happen (unique numbering)
3. If conflict detected: Ask user - Overwrite? / Rename? / Cancel?
</output_conflicts>
<commit_handling>
After successful execution:
1. Do NOT auto-commit (user controls git workflow)
2. Mention what files were created/modified
3. User can commit when ready
Exception: If user explicitly requests commit, stage and commit:
- Output files created
- Prompts archived
- Any implementation changes (for Do prompts)
</commit_handling>
<recursive_prompts>
If a prompt's output includes instructions to create more prompts:
1. This is advanced usage - don't auto-detect
2. Present the output to user
3. User can invoke skill again to create follow-up prompts
4. Maintains user control over prompt creation
</recursive_prompts>
</special_cases>
</step_3_execute>
</automated_workflow>
<reference_guides>
**Prompt patterns by purpose:**
- [references/do-patterns.md](references/do-patterns.md) - Execution prompts + output structure
- [references/plan-patterns.md](references/plan-patterns.md) - Planning prompts + plan.md structure
- [references/research-patterns.md](references/research-patterns.md) - Research prompts + research.md structure
**Supporting references:**
- [references/question-bank.md](references/question-bank.md) - Intake questions by purpose
- [references/intelligence-rules.md](references/intelligence-rules.md) - Extended thinking, parallel tools, depth decisions
</reference_guides>
<success_criteria>
**Prompt Creation:**
- Intake gate completed with purpose and topic identified
- Chain detection performed, relevant files referenced
- Prompt generated with correct structure for purpose
- Folder created in `.prompts/` with correct naming
- Output file location specified in prompt
- Metadata requirements included for Research/Plan outputs
- Decision tree presented
**Execution (if user chooses to run):**
- Dependencies correctly detected and ordered
- Prompts executed in correct order (sequential/parallel/mixed)
- Output validated after each completion
- Failed prompts handled gracefully with recovery options
- Successful prompts archived to `completed/` subfolder
- Results presented with clear summaries and next-step options
</success_criteria>