---
name: cto-audit
description: Perform deep, expert-level codebase and architecture audits to identify technical strengths, weaknesses, risks, and opportunities. Use when a user asks for an assessment of a codebase's structure, quality, or readiness for scale. Deliver detailed, actionable, and prioritized recommendations grounded in engineering best practices.
---
# CTO Audit
## Overview
Produce a rigorous, CTO-level audit of a software system, focusing on architecture, implementation quality, performance, security, and operational readiness. Deliver evidence-backed findings with clear priorities and pragmatic improvements.
## Audit Workflow
### 1. Clarify scope and constraints
- Identify repositories, services, environments, and time constraints.
- Capture business goals, scale targets, SLAs/SLOs, and regulatory constraints.
- Ask for missing inputs (architecture docs, infra diagrams, deployment details).
### 2. Map the system
- Inventory components: services, data stores, queues, and third-party dependencies (one way to seed this inventory is sketched after this list).
- Trace critical flows: request path, data lifecycle, and failure modes.
- Identify system boundaries and ownership.
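Where the system is containerized, parsing its compose file is a quick way to seed the component inventory. A minimal sketch, assuming a `docker-compose.yml` at the repo root and PyYAML installed (both are assumptions, not requirements of this workflow):

```python
# Minimal sketch: seed a component inventory from a compose file.
# Assumes docker-compose.yml exists at the repo root and PyYAML is installed.
import yaml

with open("docker-compose.yml") as f:
    compose = yaml.safe_load(f)

for name, svc in compose.get("services", {}).items():
    image = svc.get("image", "built from " + str(svc.get("build", "?")))
    depends = ", ".join(svc.get("depends_on", [])) or "none"
    print(f"{name}: image={image}, depends_on={depends}")
```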
### 3. Inspect architecture and design
- Evaluate modularity, separation of concerns, and dependency direction.
- Identify overengineering, premature abstraction, or tight coupling (the sketch after this list shows one common coupling smell).
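One concrete test of dependency direction: core logic should not construct its own infrastructure. A hypothetical sketch of the smell and its fix (all class names are illustrative):

```python
from typing import Protocol

class PostgresClient:
    """Stand-in for a real database driver (hypothetical)."""
    def __init__(self, dsn: str) -> None:
        self.dsn = dsn

    def save(self, invoice: dict) -> None:
        print("saved", invoice)

# Smell: the service constructs its own infrastructure, so core logic
# depends directly on a concrete database client and is hard to test.
class CoupledInvoiceService:
    def __init__(self) -> None:
        self.db = PostgresClient("prod-dsn")

# Fix: depend on a narrow interface and let the caller inject it.
class InvoiceStore(Protocol):
    def save(self, invoice: dict) -> None: ...

class InvoiceService:
    def __init__(self, store: InvoiceStore) -> None:
        self.store = store  # swappable in tests and during migrations

InvoiceService(PostgresClient("prod-dsn")).store.save({"id": 1})
```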
### 4. Assess implementation quality
- Verify correctness in critical paths; check error handling and edge cases.
- Identify incomplete features, TODOs, stubs, or silent failure points (illustrated after this list).
- Note code smells that degrade maintainability.
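Silent failure points are often a bare `except` that swallows the error. A minimal sketch of the pattern and a safer rewrite (the outbound call is a hypothetical stand-in):

```python
import logging

logger = logging.getLogger(__name__)

def push_to_crm(user_id: int) -> None:
    """Stand-in for a flaky outbound call (hypothetical)."""
    raise ConnectionError("upstream down")

# Smell: a bare except swallows the error; callers never learn data was lost.
def sync_user_quietly(user_id: int) -> None:
    try:
        push_to_crm(user_id)
    except Exception:
        pass

# Safer: catch narrowly, log with context, and re-raise so callers can react.
def sync_user(user_id: int) -> None:
    try:
        push_to_crm(user_id)
    except ConnectionError:
        logger.exception("CRM sync failed for user %s", user_id)
        raise
```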
### 5. Evaluate performance and efficiency
- Identify hot paths, heavy I/O, N+1 query patterns (see the sketch after this list), and memory pressure.
- Assess scalability limits (CPU, database, cache, network).
- Recommend targeted optimizations with expected impact.
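One pattern worth grepping for is the N+1 query: a loop that issues one query per row instead of a single batched query. A self-contained sketch over SQLite with a hypothetical schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'a'), (2, 'b');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# N+1: one query for users, then one query per user (N extra round trips).
users = conn.execute("SELECT id, name FROM users").fetchall()
for uid, name in users:
    total = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?", (uid,)
    ).fetchone()[0]

# Batched: one join with aggregation does the same work in a single query.
rows = conn.execute("""
    SELECT u.id, u.name, COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id, u.name
""").fetchall()
print(rows)
```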
### 6. Review security and reliability
- Check input validation, authn/authz, secrets handling, and data exposure.
- Evaluate resiliency: retries, timeouts, backoff, circuit breaking, and idempotency (the sketch after this list shows the shape to look for).
- Note incident response readiness and observability coverage.
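Healthy retry code is bounded, backs off exponentially, and adds jitter so clients do not stampede together. A minimal sketch of that shape (the `call` parameter stands in for any outbound request, which should carry its own timeout):

```python
import random
import time

def call_with_retries(call, attempts: int = 3, base_delay: float = 0.2):
    """Retry a flaky call with exponential backoff and full jitter."""
    for attempt in range(attempts):
        try:
            return call()  # the call itself should enforce a timeout
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # bounded: give up after the final attempt
            # full jitter spreads retries across clients
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))

# Hypothetical usage: wrap an outbound HTTP or RPC call.
print(call_with_retries(lambda: "ok"))
```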
### 7. Review tooling, testing, and ops
- Inspect test coverage and identify critical gaps (an example probe follows this list).
- Check CI/CD, build reproducibility, dependency hygiene, and release safety.
- Assess runtime monitoring, alerting, and runbooks.
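A useful coverage probe is whether the critical path has at least one boundary-case test. A hypothetical pytest example for an illustrative pricing function (names and logic are assumptions, shown only to indicate the kind of test to look for):

```python
import pytest

def apply_discount(price: float, pct: float) -> float:
    """Hypothetical critical-path function under test."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def test_boundary_discounts():
    # Pin both edges of the valid range.
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0

def test_rejects_invalid_discount():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```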
### 8. Synthesize and prioritize
- Group findings by severity and theme (the ordering is sketched after this list).
- Highlight what works well and why.
- Produce a prioritized roadmap of fixes and upgrades.
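A small sketch of the grouping step, with hypothetical findings, to show the fixed severity ordering the report should follow:

```python
from collections import defaultdict

SEVERITIES = ["Critical", "High", "Medium", "Low"]

# Hypothetical findings gathered during an audit.
findings = [
    {"severity": "Medium", "theme": "maintainability", "title": "duplicated retry logic"},
    {"severity": "Critical", "theme": "security", "title": "secrets committed to repo"},
    {"severity": "High", "theme": "reliability", "title": "no timeouts on outbound calls"},
]

by_severity = defaultdict(list)
for finding in findings:
    by_severity[finding["severity"]].append(finding)

for sev in SEVERITIES:  # report in fixed severity order, most severe first
    for finding in by_severity.get(sev, []):
        print(f"[{sev}] ({finding['theme']}) {finding['title']}")
```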
## Evaluation Dimensions
- Architecture and system design
- Implementation quality and correctness
- Performance and efficiency
- Code quality and maintainability
- Security and reliability
- Tooling, testing, and DevOps
- Elegance and simplicity
## Evidence Standards
- Cite concrete evidence: file paths, line numbers, configs, or logs.
- Separate confirmed issues from hypotheses; state assumptions clearly.
- Avoid ungrounded claims; request missing data when needed.
## Output Format
**Summary:**
- State purpose and scope.
- Describe the system in 1-2 sentences.
- List key strengths (2-4 bullets).
- List top risks (2-4 bullets).
**Findings (ordered by severity):**
- Use severity buckets: Critical, High, Medium, Low.
- For each finding, include impact, evidence, and a crisp fix direction.
**Recommendations (prioritized):**
- List immediate fixes (safety and correctness).
- List near-term improvements (quality, performance, maintainability).
- List strategic investments (architecture, platform, scale).
**Overall Assessment:**
- State readiness for scale and operational risk.
- Provide a short rationale and confidence level.
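A minimal skeleton of the report, with hypothetical services, paths, and findings, shown only to illustrate the shape:

```
Summary:
- Purpose/scope: audit of the payments service ahead of a 10x traffic target (hypothetical).
- System overview: a Python API over Postgres with a Redis cache.
- Strengths: clear service boundary; good test discipline on the API layer.
- Top risks: no timeouts on outbound calls; secrets in repo history.

Findings (ordered by severity):
- Critical: secrets committed to repo. Impact: credential exposure.
  Evidence: config/settings.py:42. Fix: rotate keys, move to a secrets manager.

Recommendations (prioritized):
- Immediate: rotate exposed credentials.
- Near-term: add timeouts and bounded retries to outbound calls.
- Strategic: extract the billing module behind an explicit interface.

Overall Assessment:
- Ready for moderate growth, not 10x, until the reliability gaps close. Confidence: medium.
```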