---
name: mcp
description: This skill should be used when discovering, selecting, and using MCP servers throughout the workflow. Use when integrating with external services, databases, or APIs via MCP.
---
# MCP Servers

## Purpose

Systematically discover, select, and use MCP servers throughout the workflow.
## Discovery (Phase 0 - Intake)

At the start of every task:

1. **List available MCP servers** (see the sketch at the end of this section):
   - Check connected servers and their status
   - Note which are healthy vs. failed/disconnected
2. **Inventory capabilities per server.** For each MCP server, document:
   - Name and purpose
   - Tools it provides
   - Resources it exposes
   - When to use it
3. **Record in the intake summary:**

   ```markdown
   ## Available Integrations

   | Integration | Type | Capabilities | Relevant to Task? |
   |-------------|------|--------------|-------------------|
   | gh CLI      | CLI  | issues, PRs, repos, actions | Yes - need to create PR |
   | database    | MCP  | query, schema               | No |
   | browser     | MCP  | fetch, screenshot           | Maybe - for docs research |
   ```
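A minimal shell sketch of the discovery step. The `claude mcp list` subcommand is an assumption here (it is how the Claude Code CLI enumerates configured servers); substitute whatever your client provides:

```bash
# Intake-time discovery: confirm GitHub auth, then enumerate MCP servers.
gh auth status    # verifies the gh CLI is authenticated for GitHub tasks
claude mcp list   # lists configured MCP servers and their connection status
```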
## Selection Criteria

**For GitHub operations, prefer the `gh` CLI:**
- Always use the `gh` CLI for issues, PRs, repos, actions, and releases
- More reliable than MCP (doesn't require a separate server connection)
- Full feature coverage: `gh issue`, `gh pr`, `gh api`, `gh run`, etc.
- Falls back gracefully with clear error messages
**Prefer an MCP server when:**
- It provides direct access to an external system (database, API, service)
- It handles authentication/credentials you don't have
- It provides structured data rather than scraping/parsing
- Built-in tools can't access the resource
- No equivalent CLI tool is available
**Prefer built-in tools when:**
- The task is file-system based (Read, Edit, Glob, Grep)
- An MCP server would add unnecessary indirection
- Speed matters and MCP adds latency
**Prefer a code-based approach when:**
- Multiple CLI/MCP calls are needed (orchestrate in code)
- Results need filtering before entering context
- There is complex logic around the API calls
## Usage Patterns

### Pattern 1: GitHub CLI for GitHub Operations

```
Task: Get current PR status
→ Use gh CLI: gh pr view 123 --json state,title,reviews

Task: List open issues
→ Use gh CLI: gh issue list --state open --json number,title,labels

Task: Create a PR
→ Use gh CLI: gh pr create --title "..." --body "..."

Task: Check workflow runs
→ Use gh CLI: gh run list --limit 5
```
### Pattern 2: Code Orchestration for Multiple Operations

```
Task: Analyze all open issues for patterns
→ Write a script that:
  - Calls gh issue list --json ...
  - Processes/categorizes in code
  - Returns a summary only
```
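A minimal sketch of this pattern in shell form, using `gh`'s built-in `--jq` filtering so only the category counts reach the context window; grouping by each issue's first label is an illustrative assumption:

```bash
# Summarize open issues by their first label; raw issues never enter context.
gh issue list --state open --limit 200 --json number,title,labels \
  --jq 'group_by(.labels[0].name // "unlabeled")
        | map({category: (.[0].labels[0].name // "unlabeled"), count: length})'
```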
### Pattern 3: MCP + Built-in Tool Combination

```
Task: Update code based on API schema
→ Use api MCP to fetch schema
→ Use Read/Edit for code changes
→ Use api MCP to validate changes
```
### Pattern 4: Fallback Chain

```
Task: GitHub operations (issues, PRs, repos)
→ Primary: gh CLI (always available if authenticated)
→ Fallback: GitHub MCP (if connected)
→ Fallback: gh api for raw API access

Task: Fetch documentation
→ Try: docs MCP (if available)
→ Fallback: WebFetch tool
→ Fallback: WebSearch + WebFetch
```
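A shell sketch of the GitHub chain, with a hypothetical PR number 123. The MCP step appears only as a comment because it is invoked by the agent, not the shell:

```bash
# Primary: the high-level gh command.
gh pr view 123 --json state,title,reviews 2>/dev/null \
  || gh api repos/{owner}/{repo}/pulls/123   # last resort: raw REST access
# (A connected GitHub MCP server would slot in between these two steps.)
```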
## Capability Mapping by Phase
| Phase | Integration Use Cases |
|---|---|
| Intake | Discovery: check gh auth, list available MCP servers |
| Design | Research: gh issue list, fetch API schemas, external docs |
| Implement | Integration: database queries, API calls, external services |
| Verify | Validation: gh pr checks, API contract testing, state checks |
| Review | Analysis: gh pr view, security scanning MCPs, code quality |
## MCP Error Handling

**If an MCP server fails:**
- Note the failure in the scratchpad
- Check for a fallback capability (built-in tool, alternative MCP)
- If there is no fallback and the failure is blocking: escalate
- If there is no fallback but it is non-blocking: document the limitation and continue

**If an MCP server returns unexpected data:**
- Validate against the expected schema
- If validation fails: don't proceed with bad data
- Log the issue, then try an alternative approach or escalate
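A sketch of that validation guard: `jq -e` exits non-zero when the check is false, so malformed data halts the step instead of flowing onward. The PR number and expected fields are illustrative assumptions:

```bash
# Fetch, then verify the response has the expected shape before using it.
pr_json=$(gh pr view 123 --json state,title) || { echo "fetch failed" >&2; exit 1; }
echo "$pr_json" | jq -e 'has("state") and has("title")' >/dev/null \
  || { echo "unexpected schema; trying fallback" >&2; exit 1; }
```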
## Anti-Patterns

**Don't:**
- Use GitHub MCP when the `gh` CLI is available and working
- Assume an MCP server is available without checking
- Make many sequential CLI/MCP calls when code orchestration would work
- Ignore errors and proceed with incomplete data
- Use MCP for things built-in tools handle better (file operations)
- Forget to include integration capabilities in the intake summary
**Do:**
- Use the `gh` CLI for all GitHub operations (issues, PRs, repos, actions)
- Check `gh auth status` at intake for GitHub tasks
- Check MCP health at intake for other integrations
- Match capabilities to task requirements
- Use code orchestration for multi-call scenarios
- Have fallback strategies for when integrations fail
- Document which integrations were used in the scratchpad
## Memory: Files vs Knowledge Graph
Two complementary systems for persisting information:
| Aspect | Files (docs/scratch/, HANDOFF.md) | Knowledge Graph (memory MCP) |
|---|---|---|
| Scope | Single task/session | Cross-session, cross-project |
| Content | Working state, formal artifacts | Facts, preferences, patterns |
| Lifetime | Archived when task completes | Persists indefinitely |
| Git-tracked | Yes | No |
| Queryable | Read whole file | Search by entity/relation |
### Write to Files When:
- It's task-specific working state (intake notes, implementation progress)
- You want it git-tracked and version controlled
- It's a formal artifact (design doc, review notes, HANDOFF.md)
- It needs to be human-readable in the repo
### Write to Knowledge Graph When:
- It's a user preference ("prefers functional style", "uses pytest not unittest")
- It's a cross-project pattern ("LOGOS repos use port offset pattern")
- It's a recurring behavior ("this user always wants TDD for algorithms")
- It's a codebase fact that rarely changes ("auth module lives in src/auth/")
- You'd want to recall it weeks later in a different context
## Knowledge Graph Operations

**Create entities for:**
- User preferences discovered during conversation
- Project conventions learned from CLAUDE.md or code review
- Architecture patterns that span sessions
**Create relations for:**
- "User prefers X over Y"
- "Project uses pattern Z"
- "Module A depends on Module B"
**Query before:**
- Making style/convention decisions
- Choosing between implementation approaches
- Setting up new features in familiar codebases
### Example Usage

```
Learned: User prefers small, focused commits
→ create_entities([{
    name: "user-preference-commits",
    entityType: "preference",
    observations: ["Prefers small focused commits", "Dislikes large monolithic commits"]
  }])

Learned: LOGOS uses specific port allocation
→ create_entities([{
    name: "logos-port-convention",
    entityType: "convention",
    observations: ["Each repo has port offset: hermes +10000, apollo +20000, logos +30000, sophia +40000, talos +50000"]
  }])

Before making a decision:
→ search_nodes("preference") to recall user preferences
→ search_nodes("convention") to recall project patterns
```