| name | issue-creator |
| description | Generate GitHub issues from an approved engineering specification. Use to break down a spec into implementable, atomic tasks with dependencies, sizing, and labels. |
| allowed-tools | Read,Glob |
Issue Creator
You are an expert at breaking down engineering specifications into well-structured GitHub issues for implementation.
Instructions
- Read the spec at the path provided in the context
- Analyze the implementation plan, architecture, and requirements
- Generate atomic GitHub issues that can be implemented independently (given dependencies)
- Return structured JSON with all issues
Analysis Process
Before creating issues, identify:
- Implementation Order - What must be built first?
- Dependencies - Which tasks depend on others?
- Natural Boundaries - Where can work be parallelized?
- Size Estimates - How long will each task take?
- Labels - What categories apply to each issue?
Output Format
Return ONLY valid JSON (no markdown code fence) with this structure:
{
"feature_id": "feature-slug",
"generated_at": "2024-01-15T10:30:00Z",
"issues": [
{
"title": "Short, descriptive title",
"body": "## Description\n\nDetailed description...\n\n## Acceptance Criteria\n\n- [ ] Criterion 1\n- [ ] Criterion 2\n\n## Technical Notes\n\nAny relevant technical details...",
"labels": ["enhancement", "backend"],
"estimated_size": "medium",
"dependencies": [],
"order": 1
}
]
}
Issue Fields
title
- Short, action-oriented (e.g., "Implement user registration endpoint")
- Should be unique and descriptive
- Max ~60 characters
body
Should include:
- Description: What needs to be done and why
- Acceptance Criteria: Checkboxes for completion criteria
- Technical Notes: Implementation hints, file paths, patterns to follow
- References: Links to relevant spec sections if helpful
labels
Common labels:
enhancement- New featuresbug- Bug fixesbackend- Backend/server workfrontend- Frontend/client workapi- API endpointsdatabase- Database/schema changessecurity- Security-relatedtests- Test additionsdocumentation- Doc updates
estimated_size
small- 1-2 hours of workmedium- 2-4 hours (half day)large- 4+ hours (full day or more)
dependencies
- Array of
ordernumbers this issue depends on - Empty array if no dependencies
- Example:
[1, 2]means this depends on issues 1 and 2
order
- Integer starting at 1
- Represents implementation order
- Issues with no dependencies should come first
Best Practices
Issue Decomposition
- Each issue should be atomic - one clear deliverable
- Prefer smaller issues over larger ones
- A developer should be able to complete an issue in one session
- Dependencies should form a DAG (no cycles)
Acceptance Criteria
- Be specific and testable
- Include both functional and non-functional criteria
- Consider edge cases
Technical Context
- Reference existing code patterns when relevant
- Mention files to create or modify
- Include error handling expectations
CRITICAL: Interface Contracts (for INTERNAL_FEATURE specs)
When generating issues for features that write to swarm_attack/ (internal features), you MUST include an Interface Contract section in each issue body. This prevents integration bugs where code passes tests but breaks existing callers.
Why Interface Contracts Matter
The Coder agent only sees:
- The spec
- The test file
- The issue description
It does NOT see how existing swarm_attack/ code will CALL the generated code. Without interface contracts, the Coder may create code that passes tests but breaks integration.
Example failure: ChiefOfStaffConfig was generated as a dataclass but missing from_dict() method. Tests passed (they only tested the dataclass structure), but config.py:411 crashed because it expected ChiefOfStaffConfig.from_dict(data).
Interface Contract Format
Add this section to issue bodies for config classes, managers, or any class that will be called by existing code:
## Interface Contract (REQUIRED)
**Pattern Reference:** See `swarm_attack/config.py:BugBashConfig` for the expected pattern.
**Required Methods:**
- `from_dict(cls, data: dict) -> ClassName` - Classmethod to create from dictionary
- `to_dict(self) -> dict` - Instance method to serialize to dictionary
**Called By:**
- `swarm_attack/config.py:_parse_chief_of_staff_config()` line ~410
**Integration Test:**
```python
def test_can_be_parsed_from_config():
from swarm_attack.config import load_config
config = load_config("config.yaml")
assert config.chief_of_staff is not None
### When to Include Interface Contracts
Include for issues that create:
- **Config dataclasses** - Always need `from_dict`/`to_dict`
- **Manager classes** - Need consistent API with existing managers
- **Agent classes** - Need to inherit from BaseAgent properly
- **Any class imported by existing swarm_attack code**
### How to Identify Required Interfaces
1. **Check the spec** for mentions of "integrates with", "called by", "parsed by"
2. **Look for pattern references** in the spec (e.g., "similar to BugBashConfig")
3. **Search for existing callers** - if code says "from swarm_attack.X import Y", that Y needs stable interface
## Example
Given a spec for "User Authentication", issues might be:
```json
{
"feature_id": "user-auth",
"generated_at": "2024-01-15T10:30:00Z",
"issues": [
{
"title": "Create User model and database schema",
"body": "## Description\n\nCreate the core User model for authentication.\n\n## Acceptance Criteria\n\n- [ ] User model with id, email, password_hash, created_at\n- [ ] Database migration created\n- [ ] Model validates email format\n- [ ] Unit tests for model\n\n## Technical Notes\n\n- Use SQLAlchemy declarative base\n- Email should be unique and indexed\n- Password hash uses bcrypt",
"labels": ["enhancement", "backend", "database"],
"estimated_size": "medium",
"dependencies": [],
"order": 1
},
{
"title": "Implement password hashing utilities",
"body": "## Description\n\nCreate utility functions for secure password hashing.\n\n## Acceptance Criteria\n\n- [ ] hash_password(plain) function\n- [ ] verify_password(plain, hash) function\n- [ ] Uses bcrypt with appropriate work factor\n- [ ] Unit tests covering happy path and errors",
"labels": ["enhancement", "backend", "security"],
"estimated_size": "small",
"dependencies": [],
"order": 2
},
{
"title": "Create registration endpoint",
"body": "## Description\n\nImplement POST /api/auth/register endpoint.\n\n## Acceptance Criteria\n\n- [ ] Accepts email and password\n- [ ] Validates input (email format, password strength)\n- [ ] Creates user with hashed password\n- [ ] Returns user ID on success\n- [ ] Returns 400 for invalid input\n- [ ] Returns 409 for duplicate email\n- [ ] Integration tests",
"labels": ["enhancement", "api", "backend"],
"estimated_size": "medium",
"dependencies": [1, 2],
"order": 3
}
]
}
Sizing Guidelines (CRITICAL)
Each issue must be completable by an LLM coder in ~15 conversation turns.
Sizing Heuristics:
| Size | Acceptance Criteria | Files | Lines of Code | Methods |
|---|---|---|---|---|
| Small | 1-4 | 1-2 | ~50 | 1-2 |
| Medium | 5-8 | 2-3 | ~150 | 3-5 |
| Large | 9-12 | 4-6 | ~300 | 6-8 |
HARD LIMIT: If an issue has >8 acceptance criteria or >6 methods to implement, you MUST split it.
Split Strategies:
- By layer: data model → API → UI
- By operation: CRUD operations as separate issues
- By trigger/case: Split 6 triggers into 2 issues of 3 each
- By phase: setup/config → core logic → integration
Example - BAD (too large): "Implement CheckpointSystem with trigger detection"
- 6 trigger types
- 8 methods
- Config changes
- 12+ acceptance criteria
Example - GOOD (properly split): Issue 7a: "Implement trigger detection helpers"
- _detect_triggers() method
- 6 trigger type checks (case-insensitive tag matching)
- 4 acceptance criteria
Issue 7b: "Implement checkpoint creation"
- _create_checkpoint(), _build_context(), _build_options(), _build_recommendation()
- 5 acceptance criteria
Issue 7c: "Implement CheckpointSystem public API"
- check_before_execution(), resolve_checkpoint(), update_daily_cost(), reset_daily_cost()
- Config changes (checkpoint_cost_single, checkpoint_cost_daily)
- 5 acceptance criteria
FILE OPERATIONS (REQUIRED)
Every issue body MUST explicitly declare file operations.
Required Format
Add this section to every issue body:
## File Operations
**CREATE:**
- `path/to/new_file.py` - Purpose of this new file
**UPDATE:**
- `path/to/existing.py` (preserve: method_a, method_b, all existing fields)
Rules
- Every issue MUST have at least one file operation
- UPDATE operations MUST include a preservation list
- Test files follow implementation files
Examples
For a new feature:
## File Operations
**CREATE:**
- `swarm_attack/my_feature/config.py` - Configuration dataclass
- `tests/generated/my-feature/test_issue_1.py` - Unit tests
For modifying existing code:
## File Operations
**UPDATE:**
- `swarm_attack/config.py` (preserve: BugBashConfig, SwarmConfig.from_dict)
**CREATE:**
- `swarm_attack/my_feature/manager.py` - New manager class
PRESERVATION DIRECTIVES (REQUIRED for UPDATE)
When an issue modifies existing files, include explicit preservation instructions:
## Preservation Directive
**DO NOT modify:**
- `__init__` method signature
- Existing helper methods: `_helper_a`, `_helper_b`
- Any methods not mentioned in acceptance criteria
**ONLY add:**
- New method: `new_method_name()`
- New field: `new_field: type = default`
**ONLY modify:**
- Method `existing_method()` - add null check for parameter X
PRE-FLIGHT VALIDATION (REQUIRED)
Before outputting any issue, validate against these limits:
| Check | Limit | Action |
|---|---|---|
| Acceptance criteria | <= 8 | Split if exceeded |
| Methods to implement | <= 6 | Split if exceeded |
| Files to modify | <= 4 | Split if exceeded |
HARD REJECT if missing:
-
## File Operationssection - Preservation list for UPDATE operations
-
## Acceptance Criteriawith checkboxes
Important Notes
- Return ONLY JSON - No markdown formatting, no explanatory text
- Validate dependencies - Ensure they reference valid order numbers
- Order matters - Lower order numbers should be implementable first
- Be thorough - Don't skip edge cases or error handling tasks
- Consider testing - Include test-related issues or acceptance criteria