| name | ce-compound |
| description | Document a recently solved problem to compound your team's knowledge |
/ce-compound
Coordinate multiple subagents working in parallel to document a recently solved problem.
Purpose
Captures problem solutions while context is fresh, creating structured documentation in docs/solutions/ with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.
Why "compound"? Each documented solution compounds your team's knowledge. The first time you solve a problem takes research. Document it, and the next occurrence takes minutes. Knowledge compounds.
Usage
/ce-compound # Document the most recent fix
/ce-compound [brief context] # Provide additional context hint
Pre-resolved context
Repo name (pre-resolved): !`common=$(git rev-parse --path-format=absolute --git-common-dir 2>/dev/null); repo="${common%/.git}"; echo "${repo##*/}"`
Git branch (pre-resolved): !`git rev-parse --abbrev-ref HEAD 2>/dev/null || true`
If the lines above resolved to plain values (a folder name like my-repo and a branch name like feat/my-branch), pass them into the Session Historian dispatch in Phase 1 so the agent does not waste a turn deriving them. If they still contain backtick command strings or are empty, omit them from the dispatch and let the agent derive them at runtime.
Support Files
These files are the durable contract for the workflow. Read them on-demand at the step that needs them — do not bulk-load at skill start.
- `references/schema.yaml` — canonical frontmatter fields and enum values (read when validating YAML)
- `references/yaml-schema.md` — category mapping from problem_type to directory (read when classifying)
- `assets/resolution-template.md` — section structure for new docs (read when assembling)
When spawning subagents, pass the relevant file contents into the task prompt so they have the contract without needing cross-skill paths.
Execution Strategy
Present the user with two options before proceeding, using the platform's blocking question tool: AskUserQuestion in Claude Code (call ToolSearch with select:AskUserQuestion first if its schema isn't loaded), request_user_input in Codex, ask_user in Gemini, ask_user in Pi (requires the pi-ask-user extension). Fall back to presenting options in chat only when no blocking tool exists in the harness or the call errors (e.g., Codex edit modes) — not because a schema load is required. Never silently skip the question.
1. Full (recommended) — the complete compound workflow. Researches,
cross-references, and reviews your solution to produce documentation
that compounds your team's knowledge.
2. Lightweight — same documentation, single pass. Faster and uses
fewer tokens, but won't detect duplicates or cross-reference
existing docs. Best for simple fixes or long sessions nearing
context limits.
Do NOT pre-select a mode. Do NOT skip this prompt. Wait for the user's choice before proceeding.
If the user chooses Full, ask one follow-up question before proceeding. Detect which harness is running (Claude Code, Codex, or Cursor) and ask:
Would you also like to search your [harness name] session history
for relevant knowledge to help the Compound process? This adds
time and token usage.
If the user says yes, dispatch the Session Historian in Phase 1. If no, skip it. Do not ask this in lightweight mode.
Full Mode
Phase 1 subagents return TEXT DATA to the orchestrator. They must NOT use Write, Edit, or create any files. Only the orchestrator writes files: the solution doc in Phase 2, and — if the Discoverability Check finds a gap — a small edit to a project instruction file (AGENTS.md or CLAUDE.md). The instruction-file edit is maintenance, not a second deliverable; it ensures future agents can discover the knowledge store.
Phase 0.5: Auto Memory Scan
Before launching Phase 1 subagents, check the auto-memory block injected into your system prompt for notes relevant to the problem being documented.
- Look for a block labeled "user's auto-memory" (Claude Code only) already present in your system prompt context — MEMORY.md's entries are inlined there
- If the block is absent, empty, or this is a non-Claude-Code platform, skip this step and proceed to Phase 1 unchanged
- Scan the entries for anything related to the problem being documented -- use semantic judgment, not keyword matching
- If relevant entries are found, prepare a labeled excerpt block:
## Supplementary notes from auto memory
Treat as additional context, not primary evidence. Conversation history
and codebase findings take priority over these notes.
[relevant entries here]
- Pass this block as additional context to the Context Analyzer and Solution Extractor task prompts in Phase 1. If any memory notes end up in the final documentation (e.g., as part of the investigation steps or root cause analysis), tag them with "(auto memory [claude])" so their origin is clear to future readers.
If no relevant entries are found, proceed to Phase 1 without passing memory context.
Phase 1: Research
Launch research subagents. Each returns text data to the orchestrator.
Dispatch order:
- Launch Context Analyzer, Solution Extractor, and Related Docs Finder in parallel (background)
- Then dispatch `ce-session-historian` in foreground — it reads session files outside the working directory that background agents may not have access to
- The foreground dispatch runs while the background agents work, adding no wall-clock time
1. Context Analyzer
- Extracts conversation history
- Reads `references/schema.yaml` for enum validation and track classification
- Determines the track (bug or knowledge) from the problem_type
- Identifies problem type, component, and track-appropriate fields:
  - Bug track: symptoms, root_cause, resolution_type
  - Knowledge track: applies_when (symptoms/root_cause/resolution_type optional)
- Incorporates auto memory excerpts (if provided by the orchestrator) as supplementary evidence
- Reads `references/yaml-schema.md` for category mapping into `docs/solutions/`
- Suggests a filename using the pattern `[sanitized-problem-slug]-[date].md`
- Returns: YAML frontmatter skeleton (must include the `category:` field mapped from problem_type), category directory path, suggested filename, and which track applies
- Does not invent enum values, categories, or frontmatter fields from memory; reads the schema and mapping files above
- Does not force bug-track fields onto knowledge-track learnings or vice versa
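The suggested filename pattern can be sketched as a small helper. This is a hypothetical illustration (the function name and exact slug rules are assumptions, not part of the skill contract), shown only to make the `[sanitized-problem-slug]-[date].md` pattern concrete:

```python
import re
from datetime import date

def suggest_filename(problem_summary, on=None):
    """Build a [sanitized-problem-slug]-[date].md filename.

    Hypothetical sketch: collapses runs of non-alphanumerics to
    hyphens and appends an ISO date, matching the pattern above.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", problem_summary.lower()).strip("-")
    stamp = (on or date.today()).isoformat()
    return f"{slug}-{stamp}.md"
```

For example, `suggest_filename("N+1 query in brief generation!", date(2026, 3, 24))` returns `n-1-query-in-brief-generation-2026-03-24.md`.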
2. Solution Extractor
- Reads `references/schema.yaml` for track classification (bug vs knowledge)
- Adapts output structure based on the problem_type track
- Incorporates auto memory excerpts (if provided by the orchestrator) as supplementary evidence — conversation history and the verified fix take priority; if memory notes contradict the conversation, note the contradiction as cautionary context
Bug track output sections:
- Problem: 1-2 sentence description of the issue
- Symptoms: Observable symptoms (error messages, behavior)
- What Didn't Work: Failed investigation attempts and why they failed
- Solution: The actual fix with code examples (before/after when applicable)
- Why This Works: Root cause explanation and why the solution addresses it
- Prevention: Strategies to avoid recurrence, best practices, and test cases. Include concrete code examples where applicable (e.g., gem configurations, test assertions, linting rules)
Knowledge track output sections:
- Context: What situation, gap, or friction prompted this guidance
- Guidance: The practice, pattern, or recommendation with code examples when useful
- Why This Matters: Rationale and impact of following or not following this guidance
- When to Apply: Conditions or situations where this applies
- Examples: Concrete before/after or usage examples showing the practice in action
3. Related Docs Finder
- Searches `docs/solutions/` for related documentation
- Identifies cross-references and links
- Finds related GitHub issues
- Flags any related learning or pattern docs that may now be stale, contradicted, or overly broad
- Assesses overlap with the new doc being created across five dimensions: problem statement, root cause, solution approach, referenced files, and prevention rules. Score as:
- High: 4-5 dimensions match — essentially the same problem solved again
- Moderate: 2-3 dimensions match — same area but different angle or solution
- Low: 0-1 dimensions match — related but distinct
- Returns: Links, relationships, refresh candidates, and overlap assessment (score + which dimensions matched)
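The rubric above maps mechanically to a count of matched dimensions. A minimal sketch — the dimension keys and function name are illustrative, not part of the skill contract:

```python
DIMENSIONS = ("problem_statement", "root_cause", "solution_approach",
              "referenced_files", "prevention_rules")

def score_overlap(matched_dimensions):
    """Score overlap per the rubric above:
    4-5 matches = High, 2-3 = Moderate, 0-1 = Low."""
    n = sum(1 for d in DIMENSIONS if d in matched_dimensions)
    if n >= 4:
        return "High"
    if n >= 2:
        return "Moderate"
    return "Low"
```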
Search strategy (grep-first filtering for efficiency):
- Extract keywords from the problem context: module names, technical terms, error messages, component types
- If the problem category is clear, narrow search to the matching `docs/solutions/<category>/` directory
- Use the native content-search tool (e.g., Grep in Claude Code) to pre-filter candidate files BEFORE reading any content. Run multiple searches in parallel, case-insensitive, targeting frontmatter fields. These are template patterns — substitute actual keywords:
  - `title:.*<keyword>`
  - `tags:.*(<keyword1>|<keyword2>)`
  - `module:.*<module name>`
  - `component:.*<component>`
- If search returns >25 candidates, re-run with more specific patterns. If <3, broaden to full content search
- Read only frontmatter (first 30 lines) of candidate files to score relevance
- Fully read only strong/moderate matches
- Return distilled links and relationships, not raw file contents
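The grep-first strategy above — pre-filter on frontmatter fields, reading only the first 30 lines of each file — can be sketched as follows. The field names mirror the template patterns; the function name and return shape are illustrative assumptions:

```python
import re
from pathlib import Path

def prefilter_candidates(solutions_root, keywords):
    """Pre-filter docs by frontmatter only: scan just the first 30
    lines of each file for case-insensitive keyword hits in the
    title/tags/module/component fields. Returns candidate paths,
    not file contents."""
    pattern = re.compile(
        r"^(title|tags|module|component):.*("
        + "|".join(map(re.escape, keywords)) + ")",
        re.IGNORECASE,
    )
    candidates = []
    for doc in sorted(Path(solutions_root).rglob("*.md")):
        head = doc.read_text(errors="ignore").splitlines()[:30]
        if any(pattern.search(line) for line in head):
            candidates.append(doc)
    return candidates
```

Only the returned candidates would then have their frontmatter scored and, for strong/moderate matches, their bodies read in full.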
GitHub issue search:
Prefer the `gh` CLI for searching related issues: `gh issue list --search "<keywords>" --state all --limit 5`. If `gh` is not installed, fall back to the GitHub MCP tools (e.g., unblocked data_retrieval) if available. If neither is available, skip the GitHub issue search and note it was skipped in the output.
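The gh-then-skip fallback can be sketched as below. The MCP branch is omitted since its invocation varies by harness; the function name and return shape are illustrative:

```python
import shutil
import subprocess

def search_related_issues(keywords, limit=5):
    """Run the gh CLI search described above, or report that the
    step was skipped when gh is unavailable."""
    if shutil.which("gh") is None:
        return {"skipped": True, "reason": "gh CLI not installed"}
    result = subprocess.run(
        ["gh", "issue", "list", "--search", " ".join(keywords),
         "--state", "all", "--limit", str(limit)],
        capture_output=True, text=True,
    )
    return {"skipped": False, "output": result.stdout}
```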
4. Session Historian (foreground, after launching the above — only if the user opted in)
- Skip entirely if the user declined session history in the follow-up question
- Dispatched as `ce-session-historian`
- Dispatch in foreground — this agent reads session files outside the working directory (`~/.claude/projects/`, `~/.codex/sessions/`, `~/.cursor/projects/`) which background agents may not have access to
- Omit the `mode` parameter so the user's configured permission settings apply
- Dispatch on the mid-tier model (e.g., `model: "sonnet"` in Claude Code) — the synthesis feeds into compound assembly and doesn't need frontier reasoning
Dispatch prompt — keep tight. A long, keyword-rich prompt licenses the agent to keep widening. Use this shape:
- Pre-resolved context (only if values resolved cleanly above; otherwise omit and let the agent derive): repo name, current git branch.
- Time window: explicit `7 days` unless the documented problem clearly spans a longer arc.
- Problem topic: one sentence naming the concrete issue — error message, module name, what broke and how it was fixed. Not a paragraph; not a bullet list of related topics.
- Filter rule (one line): "Only surface findings directly relevant to this specific problem. Ignore unrelated work from the same sessions or branches."
- Output schema — structure the response with these sections (omit any with no findings):
  - What was tried before
  - What didn't work
  - Key decisions
  - Related context

Do not append additional context blocks, exclusion lists, or topic-keyword bullets — verbose dispatch prompts give the agent license to keep widening the search and rapidly compound wall time. If the agent needs keyword search, it owns that decision via the `--keyword` mode on `ce-session-inventory`.
- Returns: structured digest of findings from prior sessions, or "no relevant prior sessions" if none found
Phase 2: Assembly & Write
WAIT for all Phase 1 subagents to complete before proceeding.
The orchestrating agent (main conversation) performs these steps:
Collect all text results from Phase 1 subagents
Check the overlap assessment from the Related Docs Finder before deciding what to write:

| Overlap | Action |
|---|---|
| High — existing doc covers the same problem, root cause, and solution | Update the existing doc with fresher context (new code examples, updated references, additional prevention tips) rather than creating a duplicate. The existing doc's path and structure stay the same. |
| Moderate — same problem area but different angle, root cause, or solution | Create the new doc normally. Flag the overlap for Phase 2.5 to recommend consolidation review. |
| Low or none | Create the new doc normally. |

The reason to update rather than create: two docs describing the same problem and solution will inevitably drift apart. The newer context is fresher and more trustworthy, so fold it into the existing doc rather than creating a second one that immediately needs consolidation.

When updating an existing doc, preserve its file path and frontmatter structure. Update the solution, code examples, prevention tips, and any stale references. Add a `last_updated: YYYY-MM-DD` field to the frontmatter. Do not change the title unless the problem framing has materially shifted.

Incorporate session history findings (if available). When the Session Historian returned relevant prior-session context:
- Fold investigation dead ends and failed approaches into the What Didn't Work section (bug track) or Context section (knowledge track)
- Use cross-session patterns to enrich the Prevention or Why This Matters sections
- Tag session-sourced content with "(session history)" so its origin is clear to future readers
- If findings are thin or "no relevant prior sessions," proceed without session context
Assemble the complete markdown file from the collected pieces, reading `assets/resolution-template.md` for the section structure of new docs.

Validate YAML frontmatter against `references/schema.yaml`, including the YAML-safety quoting rule for array items (see `references/yaml-schema.md` > YAML Safety Rules).

Create the directory if needed: `mkdir -p docs/solutions/[category]/`

Write the file: either the updated existing doc or the new `docs/solutions/[category]/[filename].md`

Run `python3 scripts/validate-frontmatter.py <output-path>` to catch silent-corruption parser-safety issues that the prose rules miss: malformed `---` delimiter lines, unquoted `#` in scalar values (silent comment truncation), and unquoted `:` in scalar values (silent mapping confusion). Exit 0 means the doc is parser-safe; exit 1 means the script's stderr names the offending field(s) and what to fix — quote the value(s), re-write the doc, and re-run until exit 0. Do not declare success while validation fails. The script does not enforce schema rules and does not flag YAML reserved-indicator characters (those produce loud parser errors downstream rather than silent corruption — out of scope). It uses only the Python 3 stdlib (no PyYAML or other deps).
When creating a new doc, preserve the section order from assets/resolution-template.md unless the user explicitly asks for a different structure.
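The two silent-corruption cases the validation script targets can be illustrated with a sketch. This is not `scripts/validate-frontmatter.py` itself — just a minimal, hypothetical illustration of the unquoted `#` and `:` checks:

```python
def parser_safety_issues(frontmatter_lines):
    """Sketch of the two silent-corruption checks: an unquoted '#'
    (YAML treats the rest of the value as a comment) and an
    unquoted ': ' inside a scalar (YAML reads it as a nested
    mapping)."""
    issues = []
    for line in frontmatter_lines:
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith(("'", '"', "[", "{")):
            continue  # quoted or flow-style values are parser-safe
        if "#" in value:
            issues.append((key.strip(), "unquoted '#' silently truncates the value"))
        elif ": " in value:
            issues.append((key.strip(), "unquoted ': ' silently becomes a nested mapping"))
    return issues
```

For instance, `title: Fix C# build error` would be flagged until the value is quoted as `title: "Fix C# build error"`.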
Phase 2.5: Selective Refresh Check
After writing the new learning, decide whether this new solution is evidence that older docs should be refreshed.
ce-compound-refresh is not a default follow-up. Use it selectively when the new learning suggests an older learning or pattern doc may now be inaccurate.
It makes sense to invoke ce-compound-refresh when one or more of these are true:
- A related learning or pattern doc recommends an approach that the new fix now contradicts
- The new fix clearly supersedes an older documented solution
- The current work involved a refactor, migration, rename, or dependency upgrade that likely invalidated references in older docs
- A pattern doc now looks overly broad, outdated, or no longer supported by the refreshed reality
- The Related Docs Finder surfaced high-confidence refresh candidates in the same problem space
- The Related Docs Finder reported moderate overlap with an existing doc — there may be consolidation opportunities that benefit from a focused review
It does not make sense to invoke ce-compound-refresh when:
- No related docs were found
- Related docs still appear consistent with the new learning
- The overlap is superficial and does not change prior guidance
- Refresh would require a broad historical review with weak evidence
Use these rules:
- If there is one obvious stale candidate, invoke `ce-compound-refresh` with a narrow scope hint after the new learning is written
- If there are multiple candidates in the same area, ask the user whether to run a targeted refresh for that module, category, or pattern set
- If context is already tight or you are in lightweight mode, do not expand into a broad refresh automatically; instead recommend `ce-compound-refresh` as the next step with a scope hint
When invoking or recommending ce-compound-refresh, be explicit about the argument to pass. Prefer the narrowest useful scope:
- Specific file when one learning or pattern doc is the likely stale artifact
- Module or component name when several related docs may need review
- Category name when the drift is concentrated in one solutions area
- Pattern filename or pattern topic when the stale guidance lives in `docs/solutions/patterns/`
Examples:
- `/ce-compound-refresh plugin-versioning-requirements`
- `/ce-compound-refresh payments`
- `/ce-compound-refresh performance-issues`
- `/ce-compound-refresh critical-patterns`
A single scope hint may still expand to multiple related docs when the change is cross-cutting within one domain, category, or pattern area.
Do not invoke ce-compound-refresh without an argument unless the user explicitly wants a broad sweep.
Always capture the new learning first. Refresh is a targeted maintenance follow-up, not a prerequisite for documentation.
Discoverability Check
After the learning is written and the refresh decision is made, check whether the project's instruction files would lead an agent to discover and search docs/solutions/ before starting work in a documented area. This runs every time — the knowledge store only compounds value when agents can find it.
Identify which root-level instruction files exist (AGENTS.md, CLAUDE.md, or both). Read the file(s) and determine which holds the substantive content — one file may just be a shim that `@`-includes the other (e.g., `CLAUDE.md` containing only `@AGENTS.md`, or vice versa). The substantive file is the assessment and edit target; ignore shims. If neither file exists, skip this check entirely.

Assess whether an agent reading the instruction files would learn three things:
- That a searchable knowledge store of documented solutions exists
- Enough about its structure to search effectively (category organization, YAML frontmatter fields like `module`, `tags`, `problem_type`)
- When to search it (before implementing features, debugging issues, or making decisions in documented areas — learnings may cover bugs, best practices, workflow patterns, or other institutional knowledge)

This is a semantic assessment, not a string match. The information could be a line in an architecture section, a bullet in a gotchas section, spread across multiple places, or expressed without ever using the exact path `docs/solutions/`. Use judgment — if an agent would reasonably discover and use the knowledge store after reading the file, the check passes.

If the spirit is already met, no action needed — move on.
If not:

a. Based on the file's existing structure, tone, and density, identify where a mention fits naturally. Before creating a new section, check whether the information could be a single line in the closest related section — an architecture tree, a directory listing, a documentation section, or a conventions block. A line added to an existing section is almost always better than a new headed section. Only add a new section as a last resort when the file has clear sectioned structure and nothing is even remotely related.

b. Draft the smallest addition that communicates the three things. Match the file's existing style and density. The addition should describe the knowledge store itself, not the plugin — an agent without the plugin should still find value in it.
Keep the tone informational, not imperative. Express timing as description, not instruction — "relevant when implementing or debugging in documented areas" rather than "check before implementing or debugging." Imperative directives like "always search before implementing" cause redundant reads when a workflow already includes a dedicated search step. The goal is awareness: agents learn the folder exists and what's in it, then use their own judgment about when to consult it.
Examples of calibration (not templates — adapt to the file):
When there's an existing directory listing or architecture section — add a line:

    docs/solutions/   # documented solutions to past problems (bugs, best practices, workflow patterns), organized by category with YAML frontmatter (module, tags, problem_type)

When nothing in the file is a natural fit — a small headed section is appropriate:

    ## Documented Solutions
    `docs/solutions/` — documented solutions to past problems (bugs, best practices, workflow patterns), organized by category with YAML frontmatter (`module`, `tags`, `problem_type`). Relevant when implementing or debugging in documented areas.

c. In full mode, explain to the user why this matters — agents working in this repo (including fresh sessions, other tools, or collaborators without the plugin) won't know to check `docs/solutions/` unless the instruction file surfaces it. Show the proposed change and where it would go, then use the platform's blocking question tool to get consent before making the edit: `AskUserQuestion` in Claude Code (call `ToolSearch` with `select:AskUserQuestion` first if its schema isn't loaded), `request_user_input` in Codex, `ask_user` in Gemini, `ask_user` in Pi (requires the `pi-ask-user` extension). Fall back to presenting the proposal in chat only when no blocking tool exists in the harness or the call errors (e.g., Codex edit modes) — not because a schema load is required. Never silently skip the question. In lightweight mode, output a one-liner note and move on.
Phase 3: Optional Enhancement
WAIT for Phase 2 to complete before proceeding.
Based on problem type, optionally invoke specialized agents to review the documentation:
- performance_issue → `ce-performance-oracle`
- security_issue → `ce-security-sentinel`
- database_issue → `ce-data-integrity-guardian`
- Any code-heavy issue → always run `ce-code-simplicity-reviewer`, and additionally run the kieran reviewer that matches the repo's primary stack:
  - Ruby/Rails → also run `ce-kieran-rails-reviewer`
  - Python → also run `ce-kieran-python-reviewer`
  - TypeScript/JavaScript → also run `ce-kieran-typescript-reviewer`
  - Other stacks → no kieran reviewer needed
Lightweight Mode
This mode skips parallel subagents entirely. The orchestrator performs all work in a single pass, producing the same solution document without cross-referencing or duplicate detection.
The orchestrator (main conversation) performs ALL of the following in one sequential pass:
- Extract from conversation: Identify the problem and solution from conversation history. Also scan the "user's auto-memory" block injected into your system prompt, if present (Claude Code only) — use any relevant notes as supplementary context alongside conversation history. Tag any memory-sourced content incorporated into the final doc with "(auto memory [claude])"
- Classify: Read `references/schema.yaml` and `references/yaml-schema.md`, then determine track (bug vs knowledge), category, and filename
- Write minimal doc: Create `docs/solutions/[category]/[filename].md` using the appropriate track template from `assets/resolution-template.md`, with:
  - YAML frontmatter with track-appropriate fields, applying the YAML-safety quoting rule for array items (see `references/yaml-schema.md` > YAML Safety Rules)
  - Bug track: Problem, root cause, solution with key code snippets, one prevention tip
  - Knowledge track: Context, guidance with key examples, one applicability note
- Skip specialized agent reviews (Phase 3) to conserve context
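The quoting rule for array items can be illustrated with a sketch. This is an assumption about what the YAML Safety Rules require (quote items containing `:` or `#`, or starting with a YAML indicator character) — see `references/yaml-schema.md` for the authoritative rules:

```python
def yaml_safe_items(items):
    """Quote array items that could confuse a YAML parser:
    embedded ':' or '#', or a leading indicator character.
    Hypothetical helper, not part of the skill's scripts."""
    safe = []
    for item in items:
        needs_quotes = (
            any(c in item for c in ":#")
            or item[:1] in "-?[]{}&*!|>%@`'\""
        )
        safe.append(f'"{item}"' if needs_quotes else item)
    return safe
```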
Lightweight output:
✓ Documentation complete (lightweight mode)
File created:
- docs/solutions/[category]/[filename].md
[If discoverability check found instruction files don't surface the knowledge store:]
Tip: Your AGENTS.md/CLAUDE.md doesn't surface docs/solutions/ to agents —
a brief mention helps all agents discover these learnings.
Note: This was created in lightweight mode. For richer documentation
(cross-references, detailed prevention strategies, specialized reviews),
re-run /ce-compound in a fresh session.
No subagents are launched. No parallel tasks. One file written.
In lightweight mode, the overlap check is skipped (no Related Docs Finder subagent). This means lightweight mode may create a doc that overlaps with an existing one. That is acceptable — ce-compound-refresh will catch it later. Only suggest ce-compound-refresh if there is an obvious narrow refresh target. Do not broaden into a large refresh sweep from a lightweight session.
What It Captures
- Problem symptom: Exact error messages, observable behavior
- Investigation steps tried: What didn't work and why
- Root cause analysis: Technical explanation
- Working solution: Step-by-step fix with code examples
- Prevention strategies: How to avoid in future
- Cross-references: Links to related issues and docs
Preconditions
What It Creates
Organized documentation:
- File: `docs/solutions/[category]/[filename].md`
Categories auto-detected from problem:
Bug track:
- build-errors/
- test-failures/
- runtime-errors/
- performance-issues/
- database-issues/
- security-issues/
- ui-bugs/
- integration-issues/
- logic-errors/
Knowledge track:
- architecture-patterns/ — architectural or structural patterns (agent/skill/pipeline/workflow shape decisions)
- design-patterns/ — reusable non-architectural design approaches (content generation, interaction patterns, prompt shapes)
- tooling-decisions/ — language, library, or tool choices with durable rationale
- conventions/ — team-agreed way of doing something, captured so it survives turnover
- workflow-issues/
- developer-experience/
- documentation-gaps/
- best-practices/ — fallback only, use when no narrower knowledge-track value applies
Common Mistakes to Avoid
| ❌ Wrong | ✅ Correct |
|---|---|
| Subagents write files like context-analysis.md, solution-draft.md | Subagents return text data; orchestrator writes one final file |
| Research and assembly run in parallel | Research completes → then assembly runs |
| Multiple files created during workflow | One solution doc written or updated: docs/solutions/[category]/[filename].md (plus an optional small edit to a project instruction file for discoverability) |
| Creating a new doc when an existing doc covers the same problem | Check overlap assessment; update the existing doc when overlap is high |
Success Output
✓ Documentation complete
Auto memory: 2 relevant entries used as supplementary evidence
Subagent Results:
✓ Context Analyzer: Identified performance_issue in brief_system, category: performance-issues/
✓ Solution Extractor: 3 code fixes, prevention strategies
✓ Related Docs Finder: 2 related issues
✓ Session History: 3 prior sessions on same branch, 2 failed approaches surfaced
Specialized Agent Reviews (Auto-Triggered):
✓ ce-performance-oracle: Validated query optimization approach
✓ ce-kieran-rails-reviewer: Code examples meet Rails conventions
✓ ce-code-simplicity-reviewer: Solution is appropriately minimal
File created:
- docs/solutions/performance-issues/n-plus-one-brief-generation.md
This documentation will be searchable for future reference when similar
issues occur in the Email Processing or Brief System modules.
What's next?
1. Continue workflow (recommended)
2. Link related documentation
3. Update other references
4. View documentation
5. Other
After displaying the success output, present the "What's next?" options using the platform's blocking question tool: AskUserQuestion in Claude Code (call ToolSearch with select:AskUserQuestion first if its schema isn't loaded), request_user_input in Codex, ask_user in Gemini, ask_user in Pi (requires the pi-ask-user extension). Fall back to numbered options in chat only when no blocking tool exists in the harness or the call errors (e.g., Codex edit modes) — not because a schema load is required. Never silently skip the question. Do not continue the workflow or end the turn without the user's selection.
Alternate output (when updating an existing doc due to high overlap):
✓ Documentation updated (existing doc refreshed with current context)
Overlap detected: docs/solutions/performance-issues/n-plus-one-queries.md
Matched dimensions: problem statement, root cause, solution, referenced files
Action: Updated existing doc with fresher code examples and prevention tips
File updated:
- docs/solutions/performance-issues/n-plus-one-queries.md (added last_updated: 2026-03-24)
The Compounding Philosophy
This creates a compounding knowledge system:
- First time you solve "N+1 query in brief generation" → Research (30 min)
- Document the solution → docs/solutions/performance-issues/n-plus-one-briefs.md (5 min)
- Next time similar issue occurs → Quick lookup (2 min)
- Knowledge compounds → Team gets smarter
The feedback loop:
Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
↑ ↓
└──────────────────────────────────────────────────────────────────────┘
Each unit of engineering work should make subsequent units of work easier—not harder.
Auto-Invoke
Output
Writes the final learning directly into docs/solutions/.
Applicable Specialized Agents
Based on problem type, these agents can enhance documentation:
Code Quality & Review
- ce-kieran-rails-reviewer: Reviews code examples for Rails best practices
- ce-kieran-python-reviewer: Reviews code examples for Python best practices
- ce-kieran-typescript-reviewer: Reviews code examples for TypeScript best practices
- ce-code-simplicity-reviewer: Ensures solution code is minimal and clear
- ce-pattern-recognition-specialist: Identifies anti-patterns or repeating issues
Specific Domain Experts
- ce-performance-oracle: Analyzes performance_issue category solutions
- ce-security-sentinel: Reviews security_issue solutions for vulnerabilities
- ce-data-integrity-guardian: Reviews database_issue migrations and queries
Enhancement & Research
- ce-best-practices-researcher: Enriches solution with industry best practices
- ce-framework-docs-researcher: Links to framework/library documentation references
When to Invoke
- Auto-triggered (optional): Agents can run post-documentation for enhancement
- Manual trigger: User can invoke agents after /ce-compound completes for deeper review
Related Commands
- `/research [topic]` - Deep investigation (searches docs/solutions/ for patterns)
- `/ce-plan` - Planning workflow (references documented solutions)