---
name: council
description: |
  Orchestrates multi-model deliberation from 6 AI providers. INVOKE THIS SKILL when user wants:
  - Multiple AI perspectives: "ask the council", "get opinions", "what do the models think"
  - Debate/both sides: "both sides of", "pros and cons", "I'm torn between", "arguments for and against"
  - Stress-testing: "poke holes", "what could go wrong", "find flaws", "what am I missing", "blind spots"
  - Choosing between options: "help me choose", "which should I pick", "A vs B vs C"
  - Deep understanding: "deeply understand", "thorough research", "comprehensive analysis"
  - Direct model query: "ask Claude directly", "just Gemini", "Codex only", "skip the council"
---
# Council - Multi-Model Deliberation
Orchestrate up to 6 AI providers through multi-round deliberation, anonymous peer review, and synthesis.
## Quick Start

```bash
# Auto mode (default) - LLM picks the best mode
python3 ${SKILL_ROOT}/scripts/council.py --query "[question]"

# Or specify mode explicitly
python3 ${SKILL_ROOT}/scripts/council.py --query "[question]" --mode consensus
```
## Mode Selection
The default is `auto` (LLM picks the mode). A single LLM call analyzes the query and selects:
- **Mode** - The best deliberation mode (`consensus`, `debate`, `devil_advocate`, `vote`)
- **Reasoning Style** - `default` or `structured` (3C Modified)
- **Personas** - Expert personas tailored to the specific question
Example auto-selection output:

```
Auto-selection:
  Mode: consensus
  Style: default
  Personas: Elementary Mathematician, Logic Validator, Practical Calculator
```
Auto-selection logic:
| Query Type | Mode | Style | Personas |
|---|---|---|---|
| Security/architecture | devil_advocate | structured | Attacker, Defender, Synthesizer |
| Pros vs cons | debate | default | Champion, Skeptic, Arbiter |
| Multiple choice | vote | default | Domain experts |
| Simple questions | consensus | default | Complementary experts |
Inline mode hints are also detected:
- "Let's debate X vs Y" → `debate`
- "Play devil's advocate" → `devil_advocate`
- "Vote: option A or B?" → `vote`
You can override with `--mode [mode_name]` and `--reasoning-style [style]`.
## Modes Reference

### Auto Mode (Default)

| Mode | Use When | Process |
|---|---|---|
| `auto` | Any query (default) | LLM analyzes query and picks best mode |
### Classic Modes

| Mode | Use When | Process |
|---|---|---|
| `consensus` | Factual questions, design decisions | Multi-round with convergence detection |
| `debate` | Controversial topics, binary decisions | FOR vs AGAINST personas |
| `devil_advocate` | Stress-testing, security reviews | Red/Blue/Purple team analysis |
| `vote` | Multiple choice decisions | Weighted vote tally |
| `adaptive` | Uncertain complexity | Auto-escalates based on convergence |
Mode details: see `references/modes.md`.
## Direct Mode (Skip Deliberation)
Query individual models directly without multi-round deliberation:
```bash
# Single model
python3 ${SKILL_ROOT}/scripts/council.py --direct --models claude --query "[question]" --human

# Multiple models (sequential, no synthesis)
python3 ${SKILL_ROOT}/scripts/council.py --direct --models claude,gemini --query "[question]" --human
```
Detect direct mode from natural language:
| User says | Use |
|---|---|
| "What does Claude think about X?" | `--direct --models claude` |
| "Just Gemini's opinion please" | `--direct --models gemini` |
| "Quick answer from Codex" | `--direct --models codex` |
| "Skip the council, ask Claude" | `--direct --models claude` |
| "No debate, just Gemini" | `--direct --models gemini` |
| "Claude only: is this correct?" | `--direct --models claude` |
| "Run this by Codex real quick" | `--direct --models codex` |
| "Get Claude and Gemini's take" | `--direct --models claude,gemini` |
| "All models, no synthesis" | `--direct --models claude,gemini,codex` |
| "Ask the council about X" | Full deliberation (no `--direct`) |
Trigger keywords for direct mode:
- Exclusivity: "only", "just", "solo", "single"
- Speed: "quick", "fast", "real quick"
- Bypass: "skip the council", "no debate", "no deliberation"
- Model-specific: "Claude thinks", "Gemini's opinion", "Codex's take"
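The keyword groups above can be sketched as a substring check with word boundaries. This is illustrative only; `wants_direct_mode` is a hypothetical helper, not the skill's actual routing code.

```python
import re

# Trigger phrases mirroring the lists above (illustrative sketch).
DIRECT_TRIGGERS = [
    "only", "just", "solo", "single",                    # exclusivity
    "quick", "fast", "real quick",                       # speed
    "skip the council", "no debate", "no deliberation",  # bypass
]

def wants_direct_mode(query: str) -> bool:
    """True when the query contains a direct-mode trigger phrase."""
    q = query.lower()
    return any(
        re.search(r"\b" + re.escape(t) + r"\b", q)
        for t in DIRECT_TRIGGERS
    )

print(wants_direct_mode("Just Gemini's opinion please"))   # True
print(wants_direct_mode("Ask the council about caching"))  # False
```

Word boundaries keep "only" from matching inside words like "commonly"; a production detector would also need the model-specific phrasing ("Claude thinks", "Gemini's opinion").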
## Key Options

```
--mode MODE              # consensus, debate, devil_advocate, vote, adaptive
--max-rounds N           # Max deliberation rounds (default: 3)
--trail / --no-trail     # Save full reasoning to Markdown (default: on)
--human                  # Human-readable output instead of JSON
--context-file PATH      # Load code files via manifest
--reasoning-style STYLE  # default or structured (3C Modified)
--models PROVIDERS       # Comma-separated: claude,gemini,codex,opencode,qwen
```
## Available Providers
| Provider | Models | Free Tier | Auth |
|---|---|---|---|
| claude | claude-sonnet-4, claude-opus-4, claude-haiku | - | claude (OAuth) |
| gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite | - | gemini (OAuth) |
| codex | gpt-5.2-codex, o3, gpt-4.1 | - | codex auth login |
| opencode | glm-4.7, glm-4.6, glm-4.5 | Z.AI plan | opencode auth login |
| qwen | qwen3-coder | 2000 req/day FREE | qwen then /auth |
| openrouter | Various (Llama, Mistral, DeepSeek...) | Some free | OPENROUTER_API_KEY |
### Model Selection (override defaults)

```
--model-claude MODEL     # sonnet, opus, haiku (default: claude-sonnet-4)
--model-gemini MODEL     # gemini-2.5-pro, gemini-2.5-flash (default: gemini-2.5-pro)
--model-codex MODEL      # gpt-5.2-codex, o3, gpt-4.1 (default: gpt-5.2-codex)
--model-opencode MODEL   # glm-4.7, glm-4.6, glm-4.5 (default: glm-4.7)
--model-qwen MODEL       # qwen3-coder (default: qwen3-coder)
```
## Structured Reasoning (3C Modified)

Use `--reasoning-style structured` for enhanced output quality:

```bash
python3 ${SKILL_ROOT}/scripts/council.py \
  --query "Should we use microservices?" \
  --reasoning-style structured
```
What changes with structured mode:
| Aspect | Default | Structured |
|---|---|---|
| Focus | Volume of arguments | Material impact on decision |
| Key points | Listed equally | Ranked by impact (HIGH/MEDIUM/LOW) |
| Confidence | Single number | Breakdown (verified/inferred/speculative) |
| Synthesis | Final answer | Key insight + qualifications + dissent |
Use structured when:
- Decisions have significant consequences
- You need explicit tradeoffs and uncertainty levels
- Audit trail with qualifications is important
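For intuition only, a verified/inferred/speculative breakdown could roll up into a single score as a weighted sum. The weights below are assumptions for illustration, not the skill's actual formula.

```python
# Illustrative rollup of a structured confidence breakdown.
# The category weights are assumptions, not council's real scoring.
def combined_confidence(breakdown: dict[str, float]) -> float:
    """breakdown maps category -> fraction of claims (fractions sum to 1.0)."""
    weights = {"verified": 1.0, "inferred": 0.6, "speculative": 0.2}
    return round(sum(weights[cat] * frac for cat, frac in breakdown.items()), 2)

print(combined_confidence({"verified": 0.7, "inferred": 0.2, "speculative": 0.1}))  # 0.84
```

The point of the breakdown is that two answers with the same single-number confidence can differ sharply in how much of that confidence is verified versus speculative.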
## Output

JSON with answer, confidence, and optional trail file path:

```json
{
  "answer": "Council recommends...",
  "confidence": 0.91,
  "trail_file": "./council_trails/council_2025-12-31_consensus_query.md"
}
```
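A caller can consume this output with any JSON parser; for example (field names as shown above, with `trail_file` treated as optional since the trail can be disabled with `--no-trail`):

```python
import json

# Parse the council's JSON output (structure as documented above).
raw = """{
  "answer": "Council recommends...",
  "confidence": 0.91,
  "trail_file": "./council_trails/council_2025-12-31_consensus_query.md"
}"""

result = json.loads(raw)
answer = result["answer"]
confidence = result["confidence"]
trail = result.get("trail_file")  # optional; may be absent with --no-trail

print(f"{answer} (confidence {confidence:.0%})")  # Council recommends... (confidence 91%)
```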
## Context Manifest Format

Create a manifest file listing code files to analyze:

```markdown
# Council Context

## Question
Review auth module for security issues

## Files to Analyze

### src/auth.py
- Main authentication logic

### src/config.py
- JWT configuration
```

Lines starting with `### filename.ext` are loaded automatically.
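A minimal sketch of that convention (illustrative only; the real loader lives inside `council.py` and `extract_manifest_files` is a hypothetical helper):

```python
# Extract "### filename.ext" entries from a context manifest.
def extract_manifest_files(manifest_text: str) -> list[str]:
    files = []
    for line in manifest_text.splitlines():
        if line.startswith("### "):
            candidate = line[4:].strip()
            if "." in candidate:  # crude "looks like filename.ext" check
                files.append(candidate)
    return files

manifest = """# Council Context
## Files to Analyze
### src/auth.py
- Main authentication logic
### src/config.py
- JWT configuration
"""
print(extract_manifest_files(manifest))  # ['src/auth.py', 'src/config.py']
```

The bullet lines under each heading are free-form notes for the models, not parsed paths.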
## Reference Documentation

- `references/modes.md` - Deliberation mode details
- `references/security.md` - Security features, input validation
- `references/resilience.md` - Graceful degradation, circuit breaker
- `references/failure-modes.md` - Error handling and recovery
- `references/output-format.md` - Response templates
- `references/examples.md` - Usage examples
## Resilience

- Circuit breaker per model (3 failures → OPEN)
- Chairman failover chain: claude → gemini → codex → opencode → qwen
- Adaptive timeout based on response history
- Graceful degradation (min quorum: 2 models)
- Credential rotation for rate-limit distribution
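The per-model breaker can be pictured as a failure counter with a threshold. This is a minimal sketch of the "3 failures → OPEN" rule; a production breaker (per `references/resilience.md`) would also need half-open probing and reset timeouts.

```python
# Minimal per-model circuit breaker sketch: trips OPEN after 3
# consecutive failures, resets to CLOSED on any success.
class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    @property
    def state(self) -> str:
        return "OPEN" if self.failures >= self.threshold else "CLOSED"

    def record_failure(self) -> None:
        self.failures += 1

    def record_success(self) -> None:
        self.failures = 0

breaker = CircuitBreaker()
for _ in range(3):
    breaker.record_failure()
print(breaker.state)  # OPEN
```

While a model's breaker is OPEN, the council skips it and proceeds with the remaining models, subject to the minimum quorum of 2.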
## Configuration

Persistent settings via `council.config.yaml`:

```yaml
# Available: claude, gemini, codex, opencode, qwen
providers:
  - claude
  - gemini
  - codex
  # - opencode  # GLM-4.7 via Z.AI
  # - qwen      # 2000 FREE requests/day!

chairman: claude
timeout: 60  # Base timeout, mode-specific overrides apply

# Mode-specific timeouts (seconds)
mode_timeouts:
  consensus: 60
  vote: 60
  debate: 180
  devil_advocate: 420
  adaptive: 120

max_rounds: 2
convergence_threshold: 0.5
min_quorum: 2
credential_rotation_enabled: true
```
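Assuming mode-specific values take precedence over the base `timeout` (as the config comment suggests), the lookup can be sketched as:

```python
# Resolve the effective timeout for a mode: mode-specific override
# first, base "timeout" as the fallback (a sketch, not council.py's code).
config = {
    "timeout": 60,
    "mode_timeouts": {
        "consensus": 60, "vote": 60, "debate": 180,
        "devil_advocate": 420, "adaptive": 120,
    },
}

def timeout_for(mode: str, cfg: dict) -> int:
    return cfg.get("mode_timeouts", {}).get(mode, cfg["timeout"])

print(timeout_for("devil_advocate", config))  # 420
print(timeout_for("direct", config))          # 60 (falls back to base)
```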