---
name: doc-query
description: Targeted query capabilities for machine-readable codebase documentation with cross-reference tracking, call graph analysis, and workflow automation. Enables fast lookups of classes, functions, dependencies, and function relationships without parsing source code.
---
doc-query: Codebase Documentation Query System
Description
The Skill(sdd-toolkit:doc-query) skill provides targeted query capabilities for machine-readable codebase documentation generated by the Skill(sdd-toolkit:code-doc) skill. It enables fast, structured lookups of classes, functions, modules, dependencies, and complexity metrics without parsing source code directly, plus advanced cross-reference tracking and call graph analysis.
Key Features
- Entity Lookup: Find classes, functions, and modules by exact name or regex pattern
- Cross-Reference Tracking: Find callers/callees, build call graphs, track class instantiations and imports
- Call Graph Analysis: Visualize function call relationships with configurable depth and direction
- Module Summaries: `describe-module` surfaces docstrings, hot spots, dependencies, and key entities in one call
- JSON Output Available: Use `--json` or `--format json` on commands for structured output ready for `jq`, scripts, or downstream tools
- Complexity Analysis: Identify refactoring candidates with configurable complexity thresholds
- Dependency Mapping: Understand module relationships and perform impact analysis
- Context Gathering: Smart context collection for specific tasks or feature areas
- Workflow Automation: High-level commands that combine multiple queries into single operations
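As a sketch of the JSON-output feature, structured results can be filtered with a few lines of plain Python. The field names below (`name`, `module`, `complexity`) are illustrative assumptions about the output shape, not the tool's documented schema — inspect real `--format json` output before relying on specific keys.

```python
import json

# Hypothetical output of `sdd doc complexity --format json`;
# the field names here are assumptions for illustration only.
sample_output = """
[
  {"name": "calculate_score", "module": "scoring.py", "complexity": 18},
  {"name": "load_config", "module": "config.py", "complexity": 4},
  {"name": "process_request", "module": "api.py", "complexity": 12}
]
"""

def high_complexity(raw_json: str, threshold: int = 10) -> list[str]:
    """Return names of functions at or above the complexity threshold."""
    entries = json.loads(raw_json)
    return [e["name"] for e in entries if e["complexity"] >= threshold]

print(high_complexity(sample_output))
```

The same filtering could be done with `jq` in a shell pipeline; the Python form is shown because it is easier to extend into downstream tooling.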
Quick Start
New to doc-query? Start with these automated workflow commands that handle the most common use cases:
# Understand how a feature works end-to-end
sdd doc trace-entry <function_name>
# See what breaks if you change a function
sdd doc impact <function_name>
# Find high-priority refactoring candidates
sdd doc refactor-candidates
# Track how data flows through your system
sdd doc trace-data <ClassName>
These commands combine 6-8 manual steps into single operations with intelligent analysis and risk assessment.
For specific lookups, use basic commands:
- `sdd doc find-function <name>` - Locate a function
- `sdd doc callers <function>` - See who calls this function
- `sdd doc call-graph <function>` - Visualize call relationships
- `sdd doc dependencies <module> --reverse` - Impact analysis
See below for complete command reference and advanced usage patterns.
When to Use This Skill
✅ Use Skill(sdd-toolkit:doc-query) when:
- Starting a new task - Quickly find relevant classes, functions, and modules
- Bug fixing - Locate specific functions and understand their dependencies
- Feature implementation - Find similar existing implementations to follow patterns
- Refactoring - Identify high-complexity functions that need attention
- Impact analysis - Understand what modules will be affected by changes
- Test planning - Find test files and estimate coverage
- Code exploration - Navigate and understand codebase structure with module-level summaries
❌ Don't use Skill(sdd-toolkit:doc-query) when:
- Documentation hasn't been generated yet (run the Skill(sdd-toolkit:code-doc) skill first)
- You need to read actual source code (use the Explore or Read tool instead)
- You need to analyze runtime behavior (use debugging tools)
Note: Documentation staleness is detected automatically, and stale docs are regenerated by default. Use `--skip-refresh` for faster queries without regeneration, or `--no-staleness-check` to skip staleness detection entirely.
Exploration Workflows Quick Reference
Use these workflows to systematically explore any codebase. All workflows are codebase-agnostic and work across languages, frameworks, and architectures.
| Workflow | Automated Command | Manual Alternative | When to Use It |
|---|---|---|---|
| TRACE-ENTRY-POINT | `trace-entry <function>` | 6-step pattern | "How does [action] work?" |
| TRACE-DATA-OBJECT | `trace-data <class>` | 6-step pattern | "What happens to [entity]?" |
| IMPACT-ANALYSIS | `impact <entity>` | 7-step pattern | "What breaks if I modify X?" |
| REFACTOR-PRIORITY | `refactor-candidates` | Manual complexity analysis | "What should I refactor first?" |
| EXPLORE-FEATURE-AREA | Use `context` + manual queries | 5-step pattern | "Tell me about the [feature] system" |
| FIND-PATTERN | Manual queries only | 6-step pattern | "How do we do [validation/auth/caching]?" |
| ONBOARD-TO-CODEBASE | Manual queries only | 6-step pattern | "I'm new here, where do I start?" |
| TRACE-ERROR-FLOW | Manual queries only | 5-step pattern | "How are errors handled?" |
| TRACE-CONFIGURATION | Manual queries only | 5-step pattern | "Where is [config/flag] used?" |
| TRACE-TEST-COVERAGE | Manual queries only | 5-step pattern | "What tests cover [feature]?" |
Note: Workflows with automated commands (top 4) reduce 6-7 manual steps to 1 command. Others require manual query composition using basic commands.
Decision Tree: Which Workflow Should I Use?
START: What do you want to know?
│
├─ "How does [action/request/event] work?"
│ └─ sdd doc trace-entry <function> [AUTOMATED]
│
├─ "What happens to [data/entity]?"
│ └─ sdd doc trace-data <class> [AUTOMATED]
│
├─ "What breaks if I change [X]?"
│ └─ sdd doc impact <entity> [AUTOMATED]
│
├─ "What should I refactor?"
│ └─ sdd doc refactor-candidates [AUTOMATED]
│
├─ "Tell me about [feature/module/system]"
│ └─ sdd doc context + manual queries [MANUAL]
│
├─ "How do we do [pattern] here?" (e.g., validation, auth, caching)
│ └─ Manual query pattern (see below) [MANUAL]
│
├─ "I'm new here, where do I start?"
│ └─ Manual onboarding pattern (see below) [MANUAL]
│
├─ "How are errors handled?"
│ └─ Manual error tracing pattern (see below) [MANUAL]
│
├─ "Where is [config/flag] used?"
│ └─ Manual config tracing pattern (see below) [MANUAL]
│
└─ "What tests cover [feature]?"
└─ Manual test coverage pattern (see below) [MANUAL]
Workflow Tiers
Tier 1: Automated Workflows (One-command solutions for common tasks)
- `trace-entry` - Understand execution flow end-to-end
- `trace-data` - Follow data lifecycle through the system
- `impact` - Assess blast radius of changes
- `refactor-candidates` - Identify technical debt priorities
Tier 2: Manual Query Patterns (Advanced usage for specialized needs)
- EXPLORE-FEATURE-AREA - Comprehensive feature discovery
- FIND-PATTERN - Implementation pattern analysis
- ONBOARD-TO-CODEBASE - Systematic codebase orientation
- TRACE-ERROR-FLOW - Error handling investigation
- TRACE-CONFIGURATION - Configuration tracking
- TRACE-TEST-COVERAGE - Test strategy understanding
Tool Verification
Before using this skill, verify the required tools are available:
# Verify sdd doc CLI is installed and accessible
sdd doc --help
Expected output: Help text showing available commands (stats, search, find-class, describe-module, etc.)
IMPORTANT - CLI Usage Only:
- ✅ DO: Use `sdd doc` CLI wrapper commands (e.g., `sdd doc stats`, `sdd doc search`, `sdd doc find-class`)
- ❌ DO NOT: Execute Python scripts directly (e.g., `python doc_query.py`, `python cli.py`)
The CLI provides proper error handling, validation, argument parsing, and interface consistency. Direct script execution bypasses these safeguards and may fail.
If the verification command fails, ensure the SDD toolkit is properly installed and accessible in your environment.
Requirements
- Documentation must be generated by the Skill(sdd-toolkit:code-doc) skill
- Documentation files expected in the `docs/` directory:
  - `documentation.json` (required)
  - `AI_CONTEXT.md` (optional, for quick reference)
  - `ARCHITECTURE.md` (optional, for system design)
  - `DOCUMENTATION.md` (optional, for human-readable reference)
Note: You should NOT read `documentation.json` or `DOCUMENTATION.md` manually; use the query commands instead.
Auto-Detection
sdd doc CLI automatically searches for documentation in multiple locations (in order of priority):
- Current directory: `./docs/`
- Parent directory: `../docs/`
- Alternative naming: `./documentation/`
- Claude home: `~/.claude/docs/`
No `--docs-path` needed for most cases! The tool will find your documentation automatically.
Explicit path override: Use `--docs-path PATH` to specify a custom location:
sdd doc stats --docs-path /path/to/project/docs
Check detection: The stats command shows which path was detected:
sdd doc stats
# Output includes: "Found documentation at: /path/to/docs"
Documentation Staleness Detection
NEW: doc-query now automatically detects and regenerates stale documentation by default!
How It Works
Every query command automatically checks if source files have been modified since documentation was generated:
- Compares timestamps: Documentation generation time vs. latest source file modification
- Auto-regenerates if stale: Automatically regenerates documentation before running the query
- Shows progress: Displays regeneration status and completion
- Performance: Staleness check adds ~10-50ms; regeneration takes 30-60s when needed
Default Behavior
$ sdd doc find-function calculate_score
🔄 Documentation is stale, regenerating...
✅ Documentation regenerated successfully
Found 1 result(s):
...
Flags to Control Behavior
--skip-refresh: Skip Auto-Regeneration
Skips auto-regeneration even if docs are stale, showing only a warning:
$ sdd doc find-function calculate_score --skip-refresh
⚠️ Documentation is stale (generated 3 days ago, source modified 2 hours after generation)
To auto-refresh: remove --skip-refresh flag or run 'sdd doc generate'
To suppress this warning: use --no-staleness-check
Found 1 result(s):
...
When to use:
- Speed is critical
- You're running many queries in succession
- You know docs are recent enough for your needs
- Exploratory queries where perfect accuracy isn't required
--no-staleness-check: Skip Check Entirely
Disables staleness detection completely for maximum speed:
$ sdd doc find-function calculate_score --no-staleness-check
Found 1 result(s):
...
When to use:
- You know docs are fresh
- You're working in CI/CD where docs were just generated
- Performance-critical automated workflows
- When staleness doesn't matter for your use case
Examples
Workflow 1: Default behavior (recommended)
# Automatically regenerates if needed - guaranteed fresh results
sdd doc impact UserService
Workflow 2: Fast exploration
# Skip regeneration for quick lookups
sdd doc find-class User --skip-refresh
sdd doc describe-module auth.py --skip-refresh
Workflow 3: Maximum performance
# Skip staleness check entirely
sdd doc search "validation" --no-staleness-check
Automated Workflow Commands
These commands automate common workflows by combining multiple queries into single, purpose-built commands. Use these first for the fastest results.
1. Trace Entry Point
Trace execution flow from an entry function, showing the complete call chain with architectural layers and complexity analysis.
sdd doc trace-entry <function> [--max-depth N] [--format text|json] [--docs-path PATH]
Options:
- `--max-depth N` - Maximum call chain depth (default: 5)
- `--format` - Output format: text or json (default: text)
Examples:
# Trace execution flow from main
sdd doc trace-entry main
# Trace with custom depth
sdd doc trace-entry process_request --max-depth 3
# JSON output for programmatic use
sdd doc trace-entry main --format json
Output includes:
- Complete call chain tree visualization
- Architectural layer classification (Presentation, Business Logic, Data, etc.)
- Complexity scores for each function
- Hot spot identification (high complexity or high fan-out)
- Summary statistics
When to use:
- Understanding how a feature works end-to-end
- Finding performance bottlenecks in execution paths
- Identifying complex call chains that need refactoring
- Documenting system flows
2. Trace Data Lifecycle
Trace how a data object (class) flows through the codebase, showing CRUD operations and usage patterns.
sdd doc trace-data <classname> [--include-properties] [--format text|json] [--docs-path PATH]
Options:
- `--include-properties` - Include detailed property access analysis
- `--format` - Output format: text or json (default: text)
Examples:
# Trace User class lifecycle
sdd doc trace-data User
# Include property access patterns
sdd doc trace-data User --include-properties
# JSON output
sdd doc trace-data DocumentationQuery --format json
Output includes:
- CREATE operations (where instances are created)
- READ operations (functions that access the data)
- UPDATE operations (functions that modify the data)
- DELETE operations (where data is destroyed)
- Usage map organized by architectural layer
- Property access analysis (when --include-properties is used)
When to use:
- Understanding data flow through the system
- Finding all places where data is modified
- Identifying mutation hot spots
- Planning data model refactoring
3. Impact Analysis
Analyze the impact of changing a function or class, calculating the blast radius with risk assessment.
sdd doc impact <entity> [--depth N] [--format text|json] [--docs-path PATH]
Options:
- `--depth N` - Maximum depth for indirect dependency traversal (default: 2)
- `--format` - Output format: text or json (default: text)
Examples:
# Analyze impact of changing a function
sdd doc impact calculate_score
# Deep analysis with 3 levels
sdd doc impact UserService --depth 3
# JSON output
sdd doc impact main --format json
Output includes:
- Direct dependents (functions/classes that directly use this entity)
- Indirect dependents (2nd+ degree dependencies)
- Test coverage estimation
- Risk score and level (high/medium/low)
- Actionable recommendations based on risk level
- Layer-by-layer impact breakdown
When to use:
- Pre-refactoring risk assessment
- Understanding blast radius before changes
- Planning safe refactoring strategies
- Identifying coordination needs for changes
4. Refactor Candidates
Find high-priority refactoring candidates by combining complexity metrics with usage data.
sdd doc refactor-candidates [--min-complexity N] [--limit N] [--format text|json] [--docs-path PATH]
Options:
- `--min-complexity N` - Minimum complexity threshold (default: 10)
- `--limit N` - Maximum number of candidates to return (default: 20)
- `--format` - Output format: text or json (default: text)
Examples:
# Find refactoring candidates
sdd doc refactor-candidates
# Focus on high-complexity functions
sdd doc refactor-candidates --min-complexity 20 --limit 10
# JSON output for tooling integration
sdd doc refactor-candidates --format json
Output includes:
- Prioritized list sorted by priority score (complexity × dependents)
- Risk level categorization (high/medium/low)
- Quick wins (high complexity, low dependents - safe to refactor)
- Major refactors (high complexity, high dependents - need planning)
- Actionable recommendations for each candidate
- Summary statistics and risk distribution
When to use:
- Planning technical debt reduction
- Prioritizing refactoring work
- Identifying quick wins vs major efforts
- Code quality improvement initiatives
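The priority score described above (complexity × dependents) is simple enough to reproduce when post-processing results in your own tooling. The sketch below is an illustration of the scoring idea under that stated formula, not the tool's actual implementation, and the sample numbers are invented.

```python
# Sketch of the priority-score idea: priority = complexity * dependents.
# Sample data is invented; real numbers come from the CLI's output.
candidates = [
    {"name": "get_session", "complexity": 85, "dependents": 50},
    {"name": "render_page", "complexity": 20, "dependents": 3},
    {"name": "parse_input", "complexity": 30, "dependents": 12},
]

for c in candidates:
    c["priority"] = c["complexity"] * c["dependents"]
    # Quick win: complex but with few dependents, so safer to refactor first
    c["quick_win"] = c["complexity"] >= 15 and c["dependents"] <= 5

ranked = sorted(candidates, key=lambda c: c["priority"], reverse=True)
for c in ranked:
    print(f'{c["name"]}: priority={c["priority"]} quick_win={c["quick_win"]}')
```

Note how the ranking separates major refactors (high complexity, many dependents) from quick wins even though both have high complexity.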
Basic Query Commands
These commands provide targeted lookups for specific entities and relationships. Combine them to build custom workflows when automated commands don't fit your needs.
1. Find Class
Find a specific class by exact name or regex pattern.
sdd doc find-class <name> [--pattern] [--docs-path PATH]
Examples:
# Find exact class
sdd doc find-class WizardSession
# Find classes matching pattern
sdd doc find-class ".*Session.*" --pattern
When to use:
- Starting work on a feature involving a specific class
- Understanding inheritance hierarchies
- Finding class implementation location
2. Find Function
Find a specific function by exact name or regex pattern.
sdd doc find-function <name> [--pattern] [--docs-path PATH]
Examples:
# Find exact function
sdd doc find-function calculate_score
# Find functions matching pattern
sdd doc find-function ".*score.*" --pattern
When to use:
- Bug fixing in a specific function
- Understanding function complexity and parameters
- Finding function implementation location
3. Describe Module
Produce a rich summary for a specific module, including docstring, key classes/functions, dependencies, and complexity signals.
sdd doc describe-module <module> [--top-functions N] [--include-docstrings] [--skip-dependencies] [--docs-path PATH]
Examples:
# Quick overview with defaults
sdd doc describe-module app/services/scoring.py
# Focus on the top 3 complex functions and include docstring snippets
sdd doc describe-module app/services/scoring.py --top-functions 3 --include-docstrings
# Export summary as JSON for downstream tooling
sdd doc describe-module scoring.py --json
When to use:
- Evaluating an unfamiliar file before editing
- Sharing a concise module summary with teammates
- Feeding structured module data into other tooling via `--json`
- Spotting complexity hot spots without scanning the entire documentation
4. Find Module
Find a module by name or pattern.
sdd doc find-module <name> [--pattern] [--docs-path PATH]
Examples:
# Find exact module
sdd doc find-module app/services/scoring.py
# Find modules matching pattern
sdd doc find-module ".*scoring.*" --pattern
When to use:
- Understanding module structure
- Finding all entities in a module
- Checking module dependencies
- Jumping into module descriptions via `describe-module`
5. Complexity Analysis
List functions above a complexity threshold.
sdd doc complexity [--threshold N] [--module M] [--docs-path PATH]
Examples:
# Find all functions with complexity >= 5
sdd doc complexity
# Find high-complexity functions (>= 8)
sdd doc complexity --threshold 8
# Find complex functions in a specific module
sdd doc complexity --module scoring.py
When to use:
- Identifying refactoring candidates
- Code quality assessment
- Planning technical debt reduction
6. Dependencies
Show module dependencies (direct or reverse).
sdd doc dependencies <module> [--reverse] [--docs-path PATH]
Examples:
# Show what a module imports
sdd doc dependencies app/services/scoring.py
# Show what imports this module (reverse dependencies)
sdd doc dependencies app/models/session.py --reverse
When to use:
- Impact analysis before changes
- Understanding module relationships
- Identifying circular dependencies
- Planning refactoring
Understanding Reverse Dependencies
How it works:
- Forward dependencies: Shows what a module imports (from its import statements)
- Reverse dependencies: Shows what modules import this module (who depends on it)
Important: Import Names vs File Paths
The dependency system tracks import strings as they appear in code, not normalized file paths.
✅ Forward dependencies work with file paths:
# This works - shows what this file imports
sdd doc dependencies src/myapp/services/auth.py
⚠️ Reverse dependencies require import names:
# ✅ CORRECT - Use the import name
sdd doc dependencies "myapp.services.auth" --reverse
sdd doc dependencies "auth" --reverse # May work for short names
# ❌ INCORRECT - File path won't match import strings
sdd doc dependencies src/myapp/services/auth.py --reverse
# Returns: No results (even if modules import this)
Why the difference?
When Python code imports a module:
from myapp.services.auth import login # Import string: "myapp.services.auth"
import myapp.services.auth # Import string: "myapp.services.auth"
The dependency tracker stores "myapp.services.auth" (the import string), not "src/myapp/services/auth.py" (the file path).
Finding the correct import name:
If you're not sure of the import name, use forward dependencies first:
# 1. Check what imports this module (look at the output)
sdd doc dependencies src/myapp/services/auth.py
# 2. Look for project-internal imports (not stdlib)
# Output might show: "myapp.models", "myapp.config", etc.
# 3. Use similar patterns for reverse lookups
sdd doc dependencies "myapp.services.auth" --reverse
Practical workflow for impact analysis:
# Step 1: Find the module you want to analyze
sdd doc find-module "auth" --pattern
# Step 2: Check forward deps (what it uses)
sdd doc dependencies src/myapp/services/auth.py
# Step 3: Infer import name from file structure
# File: src/myapp/services/auth.py
# Likely import: myapp.services.auth
# Step 4: Check reverse deps (who uses it)
sdd doc dependencies "myapp.services.auth" --reverse
# Step 5: Analyze the blast radius
# Combine results to understand full impact
Edge cases:
- Standard library imports (e.g., `argparse`, `json`): These will show reverse dependencies for all modules that import them
- Re-exported modules (e.g., `__init__.py`): These may not show direct imports if other modules import from the parent package
- Relative imports (e.g., `from . import foo`): Stored as relative strings, so they may need an exact match
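Step 3 of the workflow above (inferring the import name from the file path) can be automated with a small heuristic. This is an assumption-laden sketch: it presumes a conventional `src/`- or `lib/`-rooted layout and `.py` files, which may not match your repository, so always verify the result against actual forward-dependency output.

```python
from pathlib import Path

def path_to_import(file_path: str, roots: tuple[str, ...] = ("src", "lib")) -> str:
    """Heuristically convert a file path to a dotted import name.

    Assumes a conventional layout (e.g. src/pkg/mod.py -> pkg.mod);
    verify the result against real import strings before relying on it.
    """
    parts = list(Path(file_path).with_suffix("").parts)
    # Drop a leading source root like "src/" if present
    if parts and parts[0] in roots:
        parts = parts[1:]
    # A package's __init__.py is imported by its package name
    if parts and parts[-1] == "__init__":
        parts = parts[:-1]
    return ".".join(parts)

print(path_to_import("src/myapp/services/auth.py"))  # myapp.services.auth
```

The resulting name is what you would pass to `sdd doc dependencies "<import-name>" --reverse`.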
7. Callers
Show functions that call the specified function using cross-reference data from AST analysis.
sdd doc callers <function> [--format text|json] [--docs-path PATH]
Examples:
# Find all functions that call calculate_score
sdd doc callers calculate_score
# JSON output for programmatic use
sdd doc callers process_data --format json
Output includes:
- Function name and location of each caller
- Line number where the call occurs
- Call type (function_call, method_call, etc.)
- File paths for easy navigation
When to use:
- Understanding function usage patterns
- Impact analysis before refactoring
- Finding entry points to a subsystem
- Identifying who depends on this function
8. Callees
Show functions called by the specified function using cross-reference data from AST analysis.
sdd doc callees <function> [--format text|json] [--docs-path PATH]
Examples:
# Find all functions called by main
sdd doc callees main
# JSON output for programmatic use
sdd doc callees process_request --format json
Output includes:
- Function name and location of each callee
- Line number where the call occurs
- Call type (function_call, method_call, class_instantiation)
- File paths for easy navigation
When to use:
- Understanding function implementation scope
- Tracing execution paths from a function
- Identifying dependencies of a function
- Planning refactoring boundaries
9. Call Graph
Build and visualize function call graphs with configurable depth and direction.
sdd doc call-graph <function> [--depth N] [--direction up|down|both] [--format text|json|dot] [--docs-path PATH]
Options:
- `--depth N` - Maximum graph depth (default: 3)
- `--direction` - Graph direction:
  - `down`: Show callees (functions this calls) - default
  - `up`: Show callers (functions that call this)
  - `both`: Show both callers and callees
- `--format` - Output format:
  - `text`: Human-readable tree (default)
  - `json`: Structured data for tooling
  - `dot`: Graphviz format for visualization
Examples:
# Show call graph for a function (what it calls)
sdd doc call-graph process_request
# Show upstream callers (who calls this)
sdd doc call-graph calculate_score --direction up --depth 2
# Show bidirectional graph
sdd doc call-graph main --direction both --depth 3
# Generate Graphviz visualization
sdd doc call-graph main --direction both --format dot > callgraph.dot
dot -Tpng callgraph.dot -o callgraph.png
Output includes:
- Tree or graph visualization of call relationships
- Depth indicators showing call chain levels
- Cycle detection warnings
- Node count and relationship statistics
When to use:
- Visualizing complex call relationships
- Understanding execution flow across multiple layers
- Creating architecture documentation
- Planning refactoring boundaries
- Identifying circular dependencies
- Generating call graphs for documentation
10. Search
Search across all entities (classes, functions, modules).
sdd doc search <query> [--limit N] [--docs-path PATH]
Examples:
# Search for anything related to authentication
sdd doc search "auth"
# Search for scoring-related entities
sdd doc search "score.*"
# Limit results to first 10 matches
sdd doc search "CLI" --limit 10
When to use:
- Exploratory searches
- Finding all related entities
- Broad context gathering
- Use `--limit` to control output volume for broad searches
11. Context
Gather comprehensive context for a feature area.
sdd doc context <area> [--docs-path PATH]
Examples:
# Get all entities related to wizard functionality
sdd doc context "wizard"
# Get all scoring-related context
sdd doc context "scoring"
When to use:
- Starting work on a feature area
- Understanding feature scope
- Gathering context for SDD tasks
12. Statistics
Show documentation statistics and metrics.
sdd doc stats [--docs-path PATH]
When to use:
- Quick codebase overview
- Assessing code quality
- Checking documentation freshness
13. List Entities
List all classes, functions, or modules.
sdd doc list-classes [--module M] [--docs-path PATH]
sdd doc list-functions [--module M] [--docs-path PATH]
sdd doc list-modules [--docs-path PATH]
Examples:
# List all classes
sdd doc list-classes
# List functions in a specific module
sdd doc list-functions --module scoring.py
# List all modules
sdd doc list-modules
When to use:
- Getting an overview of entities
- Browsing codebase structure
- Verifying documentation completeness
Advanced: Manual Workflow Patterns
For specialized needs or custom analysis, you can manually combine basic query commands. The automated workflows (above) handle 90% of use cases, but these patterns provide fine-grained control when needed.
When to Use Manual Patterns
These workflows provide systematic approaches to understanding any codebase. All patterns use generic placeholders like [feature], [entity], [pattern] - substitute with your domain-specific terms.
Workflow 1: TRACE-ENTRY-POINT
Goal: Understand the end-to-end flow of a user action, API request, or system event.
Use cases: "How does the scoring process work?", "What happens when a user clicks 'Submit'?", "How are webhook events processed?"
Automated Approach (Recommended)
sdd doc trace-entry <function_name> [--max-depth N]
Examples:
# Trace execution from FastAPI endpoint
sdd doc trace-entry process_scoring_request
# Trace with custom depth
sdd doc trace-entry handle_submit --max-depth 3
Output: Complete call chain, architectural layers, complexity analysis, hot spots
Manual Approach (For Custom Analysis)
If you need fine-grained control or the function name is unknown:
# 1. Find entry point
sdd doc search "[endpoint|route|handler].*[feature]"
# 2. Get callers/callees
sdd doc callees <function> # or call-graph for visualization
# 3. Describe key modules
sdd doc describe-module <module>
Workflow 2: TRACE-DATA-OBJECT
Goal: Follow a specific data structure or entity through its lifecycle.
Use cases: "What happens to a User object?", "How is OrderData transformed?", "Where is ConfigSettings used?"
Automated Approach (Recommended)
sdd doc trace-data <ClassName> [--include-properties]
Example: sdd doc trace-data User
Output: CRUD operations, usage map by layer, property access patterns
Manual Approach
# 1. Find class definition
sdd doc find-class <ClassName>
# 2. Find instantiation sites
sdd doc call-graph <ClassName> --direction both
# 3. Search for usage patterns
sdd doc search "create.*[Entity]|update.*[Entity]"
Workflow 3: IMPACT-ANALYSIS
Goal: Identify all code affected by modifying a function, class, or module.
Use cases: "What breaks if I refactor this function?", "What depends on this API endpoint?", "Can I safely delete this class?"
Automated Approach (Recommended)
sdd doc impact <entity> [--depth N]
Example: sdd doc impact calculate_score --depth 2
Output: Direct/indirect dependents, test coverage estimate, risk score (high/medium/low), actionable recommendations
Manual Approach
# 1. Find callers
sdd doc callers <function>
# 2. Find reverse dependencies
sdd doc dependencies <module> --reverse
# 3. Assess complexity
sdd doc complexity --module <module>
# 4. Check usage
sdd doc search "<exact-name>"
Note: For reverse dependencies, use import names, not file paths (e.g., `myapp.utils.scoring`, not `utils/scoring.py`). See the "Understanding Reverse Dependencies" section.
Workflow 4: REFACTOR-PRIORITY
Goal: Identify high-complexity, high-impact code for refactoring.
Use cases: "What should I refactor first?", Technical debt reduction planning
Automated Approach (Recommended)
sdd doc refactor-candidates [--min-complexity N] [--limit N]
Example: sdd doc refactor-candidates --min-complexity 15
Output: Prioritized list by priority score (complexity × dependents), risk categorization, quick wins vs major refactors
Manual Approach
# 1. Find high-complexity functions
sdd doc complexity --threshold 15
# 2. For each, assess impact
sdd doc callers <function>
sdd doc dependencies <module> --reverse
Additional Manual Patterns
For specialized investigations without automated commands, combine basic queries:
EXPLORE-FEATURE-AREA
Goal: Comprehensive feature context gathering
sdd doc context "[feature]" # Get all related entities
sdd doc describe-module [key-modules] # Understand each layer
sdd doc complexity | grep "[feature]" # Find hot spots
FIND-PATTERN
Goal: Discover how patterns (validation, caching, auth) are implemented
sdd doc search "[pattern-keyword]" # Find implementations
sdd doc find-class ".*[Pattern].*" --pattern # Find pattern classes
sdd doc describe-module [pattern-file] # Understand architecture
ONBOARD-TO-CODEBASE
Goal: Get oriented in an unfamiliar codebase
sdd doc stats # Overview: size, complexity baseline
sdd doc search "main|index|app" # Find entry points
sdd doc list-modules # Understand architecture
sdd doc complexity --threshold 10 # Identify areas to avoid initially
TRACE-ERROR-FLOW
Goal: Understand error propagation and handling
sdd doc find-class ".*Error.*|.*Exception.*" --pattern # Find error types
sdd doc search "raise|throw|except" # Find error handling
sdd doc describe-module [error-module] # Understand error architecture
TRACE-CONFIGURATION
Goal: Track configuration usage
sdd doc find-class "[Config|Settings].*" --pattern # Find config classes
sdd doc search "get_settings|config" # Find access patterns
sdd doc dependencies [config-module] --reverse # Find consumers
TRACE-TEST-COVERAGE
Goal: Understand testing strategy
sdd doc list-modules | grep "test" # Find test files
sdd doc describe-module tests/[feature]_test.py # Understand test structure
sdd doc dependencies tests/[test-file] # See what's being tested
When to Use Manual Queries vs Automated Workflows
Use automated workflows (trace-entry, trace-data, impact, refactor-candidates) when:
- You have a specific function/class name
- You need comprehensive analysis with risk assessment
- You want architectural layer classification
- Time is limited and you need fast results
Use manual query patterns when:
- Exploring unfamiliar territory without specific targets
- Need custom analysis outside automated workflow scope
- Building integration scripts or custom tooling
- Investigating specialized patterns (error handling, config, tests)
- Learning the codebase structure from scratch
Complete Examples
Example 1: Tracing Execution Flow
Understand how the scoring feature works end-to-end:
sdd doc trace-entry run_scoring
Shows complete call chain from HTMX endpoint → scoring service → LLM service → OpenAI API, with architectural layer classification, complexity scores for each function, and hot spot identification.
Example 2: Impact Analysis for Refactoring
Assess the blast radius before refactoring get_session function:
sdd doc impact get_session
Returns risk level CRITICAL (complexity 85 × 50+ dependents), lists direct and indirect dependents, estimates test coverage, and provides actionable recommendations for safe refactoring approach.
Example 3: Finding Refactoring Candidates
Identify high-priority technical debt:
sdd doc refactor-candidates --min-complexity 15
Returns prioritized list sorted by risk score (complexity × dependents), categorizes by risk level, identifies quick wins (high complexity, low dependents) vs major refactors (high complexity, high dependents), with specific recommendations for each.
Performance Notes
- All queries are fast (milliseconds) as they only read JSON
- No source code parsing or AST analysis
- Documentation is cached in memory during a query session
- For large codebases (>1000 files), queries remain performant
For more information on generating documentation, see the Skill(sdd-toolkit:code-doc) skill.
For spec-driven development workflows, see Skill(sdd-toolkit:sdd-plan), Skill(sdd-toolkit:sdd-next), and Skill(sdd-toolkit:sdd-update) skills.