---
name: signalmark-saas-development
description: Use when developing features for the Signalmark competitive intelligence SaaS platform
---
# Signalmark SaaS Development Workflow

This skill provides the complete development workflow for the Signalmark project, integrating superpowers skills with project-specific requirements.
## When to Use

Use this skill for ANY feature development, bug fix, or enhancement work on Signalmark.
## Project Context

### What is Signalmark?

- AI-powered competitive intelligence platform for B2B SaaS companies
- Monitors competitor pricing, job postings, changelogs, and messaging
- Transforms raw competitive data into strategic recommendations

### Tech Stack

- Backend: Python 3.12 + FastAPI + PostgreSQL + pgvector + Celery
- Frontend: React 19 + TypeScript + Vite + Mantine
- Crawler: Crawl4AI, Firecrawl, Playwright
- AI: OpenAI GPT-4o, text-embedding-3-small (see the pgvector sketch below)
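As one illustration of how the AI and database pieces fit together, here is a minimal sketch of a pgvector similarity query. The table and column names (`evidence_blocks`, `embedding`) are hypothetical, not the actual Signalmark schema:

```python
# Hypothetical: embed a query with text-embedding-3-small, then rank rows
# by cosine distance using pgvector's <=> operator.
from openai import OpenAI
from sqlalchemy import text

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_similar_evidence(db_session, query: str, limit: int = 5):
    embedding = client.embeddings.create(
        model="text-embedding-3-small",
        input=query,
    ).data[0].embedding
    # pgvector accepts a '[x,y,z]' literal cast to the vector type
    vector_literal = "[" + ",".join(map(str, embedding)) + "]"
    return db_session.execute(
        text(
            "SELECT id, title FROM evidence_blocks "
            "ORDER BY embedding <=> CAST(:q AS vector) LIMIT :limit"
        ),
        {"q": vector_literal, "limit": limit},
    ).fetchall()
```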
## The Signalmark Development Workflow

### Phase 1: Design & Planning
Before writing ANY code:

1. **Read project documentation (MANDATORY)**
   - Read `CLAUDE.md` - comprehensive project guide
   - Read `context/INVARIANTS.md` - non-negotiable system rules
   - Check `context/ARCHITECTURE.md` for system design patterns
2. **Use the brainstorming skill** (if the feature is new or complex)

   Use skill: `superpowers:brainstorming`
   - Refine requirements through Socratic questioning
   - Explore alternatives
   - Present the design in reviewable sections
   - Save the design document to `docs/features/[feature-name].md`
3. **Create an implementation plan**

   Use skill: `superpowers:writing-plans`
   - Break the work into bite-sized tasks (2-5 minutes each)
   - Follow Signalmark patterns from CLAUDE.md
   - Include file paths and verification steps
   - Reference the database schema from CLAUDE.md § 5
   - Save the plan to `docs/plans/[feature-name]-plan.md`
### Phase 2: Set Up the Development Environment

Create an isolated workspace:

Use skill: `superpowers:using-git-worktrees`
- Branch name: `feature/[feature-name]` or `fix/[bug-name]` (see the sketch below)
- Verify Docker containers are running
- Run baseline tests:

```bash
docker exec signalmark-backend-dev pytest tests/unit/ -v
```
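For example, creating a worktree and branch in one step (the path and branch name are illustrative):

```bash
# New branch + dedicated worktree next to the main checkout
git worktree add -b feature/pricing-alerts ../signalmark-pricing-alerts
```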
### Phase 3: Implementation

Choose an execution method based on complexity:

#### Option A: Subagent-Driven Development (Recommended)

For independent, parallelizable tasks:

Use skill: `superpowers:subagent-driven-development`
Specialized agents available:
- Orchestrator (Opus) - master task coordination and delegation
- Architect (Opus) - system design decisions
- Planner (Opus) - strategic planning and task decomposition
- Builder (Sonnet) - implementation
- Code Reviewer (Sonnet) - code quality
- Tester (Sonnet) - test generation
- Debugger (Sonnet) - systematic debugging
- Documenter (Sonnet) - documentation
- Security (Sonnet) - security audits
- Performance (Sonnet) - performance optimization
Agent configuration:
- Planning model (Orchestrator, Architect, Planner): `claude-opus-4-5-20251101` (Opus 4.5)
- Implementation model (all others): `claude-sonnet-4-5-20250929` (Sonnet 4.5)
#### Option B: Executing Plans (For Sequential Work)

For tightly coupled tasks:

Use skill: `superpowers:executing-plans`
- Execute in batches of 3 tasks
- Review at checkpoints between batches

#### Option C: Parallel Agents (For Independent Problems)

For multiple unrelated bugs or features:

Use skill: `superpowers:dispatching-parallel-agents`
- Dispatch one agent per independent domain
- Integrate results after completion
### Phase 4: Testing (MANDATORY)

Follow test-driven development:

Use skill: `superpowers:test-driven-development`

Signalmark testing requirements:
- Unit tests: `tests/unit/` - database models, services, repositories
- API tests: `tests/unit/api/` - endpoint behavior
- Integration tests: `tests/integration/` - full workflows
- E2E tests: `tests/e2e/` - browser automation with Playwright
Run tests:

```bash
# All tests
docker exec signalmark-backend-dev pytest tests/ -v

# Specific category
docker exec signalmark-backend-dev pytest tests/unit/api/ -v

# With coverage
docker exec signalmark-backend-dev pytest tests/ --cov=app --cov-report=term-missing
```
Test standards:
- Transaction rollback for isolation, NOT table drops (see the fixture sketch below)
- Fixtures live in `tests/unit/api/conftest.py`
- Follow the naming convention `test_<what>_<expected_behavior>`
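A minimal sketch of what a rollback-based fixture can look like, assuming SQLAlchemy sessions and an upstream `engine` fixture (the real fixture in `tests/unit/api/conftest.py` may differ):

```python
# Illustrative rollback-isolation fixture: every test runs inside a
# transaction that is rolled back afterwards, so tables are never dropped.
import pytest
from sqlalchemy.orm import Session

@pytest.fixture
def db_session(engine):  # `engine` is an assumed upstream fixture
    connection = engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)
    try:
        yield session
    finally:
        session.close()
        transaction.rollback()  # undo all writes made during the test
        connection.close()
```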
### Phase 5: Code Review

Self-review first:

Use skill: `superpowers:requesting-code-review`

Automated review:
- Trigger the Code Reviewer agent for quality checks
- Check compliance against the plan
Manual review checklist:
- ✅ Follows Evidence-First architecture (all AI outputs reference evidence_ids)
- ✅ Multi-tenant isolation (all queries filter by team_id)
- ✅ Enums used (no magic strings)
- ✅ Repository pattern (all DB operations through repos)
- ✅ Tests pass with transaction rollback
- ✅ Follows YAGNI principle
- ✅ Error handling appropriate
- ✅ Logging added for debugging
### Phase 6: Verification

Before marking work complete:

Use skill: `superpowers:verification-before-completion`
Verification checklist:
- All tests passing (unit + integration)
- No new linting errors
- Database migrations created (if schema changed)
- Documentation updated (if API/behavior changed)
- Environment variables documented (if new configs)
- CLAUDE.md updated (if architecture changed)
### Phase 7: Completion

Finish the development branch:

Use skill: `superpowers:finishing-a-development-branch`
- Run the full test suite
- Verify Docker services are still running
- Present options: merge / PR / keep / discard
- Clean up the worktree
## Signalmark-Specific Patterns

### Evidence-First Architecture (CRITICAL)
EVERY AI-generated insight MUST reference its source evidence:

```python
# ✅ Correct - traceable
signal = Signal(
    title="Price increase detected",
    evidence_ids=["ev_ABC123", "ev_DEF456"],  # REQUIRED
    ai_reasoning="Based on evidence comparison...",
)

# ❌ Wrong - not traceable
signal = Signal(
    title="Price increase detected",
    evidence_ids=[],  # MISSING - BUG!
)
```
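One way to enforce this invariant at construction time, sketched with a Pydantic validator (the schema name is hypothetical; the real enforcement point may differ):

```python
# Hypothetical guard: reject AI-generated signals that cite no evidence.
from pydantic import BaseModel, field_validator

class SignalCreate(BaseModel):
    title: str
    evidence_ids: list[str]
    ai_reasoning: str | None = None

    @field_validator("evidence_ids")
    @classmethod
    def must_cite_evidence(cls, v: list[str]) -> list[str]:
        if not v:
            raise ValueError("AI-generated signals must reference at least one evidence_id")
        return v
```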
### Multi-Tenant Isolation (CRITICAL)

ALL queries MUST filter by `team_id`:

```python
# ✅ Correct
signals = db.query(Signal).filter(Signal.team_id == team_id).all()

# ❌ Wrong - data leakage!
signals = db.query(Signal).all()
```
### Database Operations

Use the repository pattern:

```python
from app.db.repositories.evidence import EvidenceRepository

# ✅ Correct
evidence_repo = EvidenceRepository(db_session)
evidence = evidence_repo.create(team_id=team_id, ...)

# ❌ Wrong - direct model access
evidence = EvidenceBlock(...)
db_session.add(evidence)
```
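For orientation, a hedged sketch of what a team-scoped repository might look like; the actual `EvidenceRepository` API may differ:

```python
# Illustrative repository that bakes team_id into every operation, so
# multi-tenant isolation is enforced in one place.
from sqlalchemy.orm import Session

from app.db.models import EvidenceBlock  # assumed model location

class EvidenceRepository:
    def __init__(self, db_session: Session):
        self.db = db_session

    def create(self, team_id: str, **fields) -> EvidenceBlock:
        evidence = EvidenceBlock(team_id=team_id, **fields)
        self.db.add(evidence)
        self.db.flush()  # assign the primary key without committing
        return evidence

    def list_for_team(self, team_id: str) -> list[EvidenceBlock]:
        return (
            self.db.query(EvidenceBlock)
            .filter(EvidenceBlock.team_id == team_id)
            .all()
        )
```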
### Testing Pattern

```python
class TestFeatureName:
    """Test group description."""

    def test_specific_behavior(self, db_session, test_team):
        """Test a specific expected behavior."""
        # Arrange
        model = SomeModel(team_id=test_team.id, ...)
        db_session.add(model)
        db_session.flush()

        # Act
        result = function_under_test(model.id)

        # Assert
        assert result == expected
```
### API Endpoint Pattern

```python
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session

from app.api.deps import get_db, get_current_user
from app.db.models import User

router = APIRouter()

@router.get("/endpoint")
def endpoint_name(
    db: Session = Depends(get_db),
    current_user: User = Depends(get_current_user),
):
    # Always filter by team_id
    team_id = current_user.team_id
    # ... implementation
```
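Putting the pieces together, a hypothetical endpoint that combines this pattern with the repository sketch above (route and method names are illustrative):

```python
# Hypothetical listing endpoint; reuses router, Depends, Session, get_db,
# get_current_user, and User from the pattern above.
@router.get("/evidence")
def list_evidence(
    db: Session = Depends(get_db),
    current_user: User = Depends(get_current_user),
):
    repo = EvidenceRepository(db)
    return repo.list_for_team(team_id=current_user.team_id)
```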
## MCP Servers Available
Use these MCP servers for enhanced capabilities:
- postgres - Database queries and migrations
- git - Version control operations
- github - PR management, issues
- playwright - Browser automation for E2E tests
- context7 - Library documentation (FastAPI, SQLAlchemy, React)
- memory - Persistent context across sessions
- fetch - API testing
## Common Pitfalls
❌ Don't:
- Skip reading CLAUDE.md before starting
- Ignore INVARIANTS.md rules
- Create signals without evidence_ids
- Query database without team_id filter
- Use magic strings instead of enums
- Skip tests ("I'll add them later")
- Commit code before tests pass
- Forget transaction rollback in tests
✅ Do:
- Read the relevant CLAUDE.md section first
- Follow the Evidence-First architecture
- Use the repository pattern for all DB operations
- Write tests BEFORE implementation (TDD)
- Check multi-tenant isolation
- Use enums from `app.db.models.enums` (see the sketch after this list)
- Commit after each passing test
- Update documentation when behavior changes
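A minimal sketch of the enum-over-magic-string rule; the enum name and members here are illustrative, the real ones live in `app.db.models.enums`:

```python
from enum import Enum

class SignalStatus(str, Enum):  # illustrative; not the actual enum
    NEW = "new"
    REVIEWED = "reviewed"
    ARCHIVED = "archived"

# ✅ Correct - enum member, typo-safe and discoverable
status = SignalStatus.NEW

# ❌ Wrong - magic string
status = "new"
```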
## Quick Reference Commands

```bash
# Start development
cd autonomous-agent
python autonomous_agent_demo.py --project-dir ./signalmark

# Run tests
docker exec signalmark-backend-dev pytest tests/unit/ -v

# Check logs
docker logs -f signalmark-backend-dev
docker logs -f signalmark-celery-worker

# Database migrations
docker exec signalmark-backend-dev alembic revision --autogenerate -m "Description"
docker exec signalmark-backend-dev alembic upgrade head

# Access database
docker exec -it signalmark-db psql -U signalmark_user -d signalmark
```
## Documentation References
- CLAUDE.md - Complete project guide (66KB, 20 sections)
- context/INVARIANTS.md - System rules (CRITICAL)
- context/ARCHITECTURE.md - System design
- BACKEND_TEST_PLAN.md - Testing strategy
- AGENT_CONFIGURATION.md - Agent setup
- QUICK_REFERENCE.md - Daily commands
## Success Criteria
Before marking ANY task complete:
- ✅ Tests passing (unit + integration)
- ✅ Code review passed
- ✅ Evidence-First compliance (if AI involved)
- ✅ Multi-tenant isolation verified
- ✅ Documentation updated
- ✅ Follows Signalmark patterns
- ✅ YAGNI principle applied
- ✅ Committed with meaningful message
## Integration with Superpowers

This skill REQUIRES these superpowers skills:
- `superpowers:brainstorming` - design refinement
- `superpowers:writing-plans` - implementation planning
- `superpowers:subagent-driven-development` - parallel execution
- `superpowers:test-driven-development` - RED-GREEN-REFACTOR
- `superpowers:requesting-code-review` - quality gates
- `superpowers:finishing-a-development-branch` - completion workflow
## Model Selection for Tasks

### Orchestrator & Planning (Opus)

The orchestrator and all planning tasks use Claude Opus for superior strategic reasoning:
- Orchestrator: `claude-opus-4-5-20251101` - master task coordination
- Architecture & design: `claude-opus-4-5-20251101` - system design decisions
- Planning: `claude-opus-4-5-20251101` - task decomposition and strategy
- Complex analysis: `claude-opus-4-5-20251101` - multi-factor decisions
### Implementation (Sonnet)

All implementation tasks use Claude Sonnet for efficient execution:
- Implementation: `claude-sonnet-4-5-20250929` - feature building
- Code review: `claude-sonnet-4-5-20250929` - quality checks
- Testing: `claude-sonnet-4-5-20250929` - test generation
- Documentation: `claude-sonnet-4-5-20250929` - technical writing
- Debugging: `claude-sonnet-4-5-20250929` - bug fixes

### Simple Tasks (Haiku)

Lightweight tasks use Claude Haiku for cost efficiency:
- Simple fixes: `claude-3-5-haiku-20241022` - minor edits
- Formatting: `claude-3-5-haiku-20241022` - code cleanup
### Agent Model Assignments
| Agent Type | Task Category | Model | Model ID |
|---|---|---|---|
| Orchestrator | Planning | Opus | claude-opus-4-5-20251101 |
| Architect | Planning | Opus | claude-opus-4-5-20251101 |
| Planner | Planning | Opus | claude-opus-4-5-20251101 |
| Builder | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
| Code Reviewer | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
| Tester | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
| Debugger | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
| Documenter | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
| Security | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
| Performance | Implementation | Sonnet | claude-sonnet-4-5-20250929 |
Rationale: Opus excels at strategic planning and complex reasoning; Sonnet provides the best speed/quality trade-off for implementation tasks.

Remember: This is a SaaS platform with real users and real data. Follow the Evidence-First architecture, ensure multi-tenant isolation, and write tests FIRST. Quality > Speed.