Claude Code Plugins

Community-maintained marketplace

MCP Builder — FastMCP Workflow

@jscraik/Cortex-OS

Create workflow-oriented MCP servers in Python or TypeScript using FastMCP with structured tools, actionable errors, and evaluation-ready quality gates.

Install Skill

1. Download the skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

id skill-mcp-builder
name MCP Builder — FastMCP Workflow
description Create workflow-oriented MCP servers in Python or TypeScript using FastMCP with structured tools, actionable errors, and evaluation-ready quality gates.
version 1.0.0
author brAInwav Development Team
owner @jamiescottcraik
category integration
difficulty advanced
tags mcp, fastmcp, integration, api, automation, tooling
resources ./resources/anthropic-mcp-builder-reference.md, ./resources/scripts/python-fastmcp-todo.py, ./resources/scripts/typescript-fastmcp-todo.ts, ./resources/evaluations/mcp-evaluation-template.xml, ./resources/LICENSE.txt
estimatedTokens 5200
license Complete terms in LICENSE.txt
requiredTools python, typescript, fastmcp, zod, pydantic
prerequisites Read the latest Model Context Protocol specification; Familiarity with REST or GraphQL APIs and auth flows; Comfort with designing typed request/response models
relatedSkills skill-api-integration-standards, skill-testing-evidence-triplet
deprecated false
replacedBy null
impl packages/mcp-toolkit/src/skills/mcp-builder.ts#exportMcpBuilder
preconditions Governance pack reviewed (`/.cortex/rules/`), especially RULES_OF_AI and Skills System Governance; North-star acceptance test drafted for the target workflows; API documentation and auth flows fully enumerated
sideEffects Writes Local Memory effectiveness entries linking the MCP server to downstream task metrics; Generates MCP Inspector transcripts for audit
estimatedCost $0.006 / build cycle (~1200 tokens for planning + evaluation).
calls skill-tdd-red-green-refactor, skill-financial-ratio-analysis
requiresContext memory://skills/skill-mcp-builder/historical-evaluations
providesContext memory://skills/skill-mcp-builder/latest-scaffold
monitoring true
estimatedDuration PT45M

MCP Server Development Guide (FastMCP · Python & TypeScript)

When to Use

  • You must ship a new MCP server that lets an LLM accomplish an end-to-end workflow (not just thin API wrappers).
  • Existing connectors consistently time out, truncate, or return ambiguous errors and need a standards-compliant rebuild.
  • Security or governance reviews require evidence that tool inputs/outputs are validated and auditable.
  • You are onboarding a new API integration and need a repeatable plan for Python (FastMCP) or TypeScript deployments.
  • Evaluations reveal low success rates for agents using current MCP tools and you need a remediation playbook.

How to Apply

  1. Perform deep research: read the MCP spec, framework docs, and target API guides; capture decisions in the task folder.
  2. Define workflows and tool inventory, prioritising task-complete actions and high-signal responses.
  3. Scaffold infrastructure (auth, pagination, truncation) and shared utilities before adding tools.
  4. Implement tools with typed inputs, structured outputs, annotations, and actionable error paths in FastMCP (Python or TypeScript); a minimal sketch follows this list.
  5. Build evaluations (≥10) and run MCP Inspector/automated tests; capture evidence logs, coverage, and Local Memory outcomes.
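
As a concrete anchor for step 4, here is a minimal Python sketch assuming the FastMCP decorator API from the MCP Python SDK; the server name, tool, and fields are illustrative rather than part of this skill's bundled resources.

```python
# Minimal FastMCP tool sketch (Python). Server and tool names are hypothetical;
# canonical samples live under resources/scripts/ in this skill.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-workflow")

@mcp.tool()
def summarize_recent_activity(account_id: str, limit: int = 20) -> str:
    """Return a compact, high-signal summary instead of a raw API dump."""
    # A real tool would call the target API, paginate, and truncate here.
    return f"No activity recorded for account {account_id} (checked the last {limit} events)."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```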

Success Criteria

  • Tool set covers every workflow in the implementation plan with no dangling TODO/HACK markers.
  • Structured outputs (or compact JSON + summaries) validated via MCP Inspector and automated tests.
  • Error paths return actionable remediation hints and avoid protocol-level failures for domain issues.
  • Evaluation pack (≥10 read-only tasks) passes at ≥90% success rate in automated agent runs.
  • Evidence Triplet logged: failing inspector run, passing inspector run, plus evaluation report stored in Local Memory with effectiveness ≥0.8.

0) Mission Snapshot — What / Why / Where / How / Result

  • What: Deliver FastMCP servers that expose high-quality tools/resources enabling LLMs to complete real tasks end-to-end.
  • Why: Workflow-centric, context-aware tools reduce hallucinations, review churn, and production incidents.
  • Where: Applies to Cortex-OS MCP connectors (stdio/SSE/HTTP) deployed across apps, agents, and CLI surfaces.
  • How: Follow structured planning, typed implementation, observability hooks, and evaluation cycles described herein.
  • Result: Deployable MCP server with evidence of reliability, actionable errors, and maintained evaluation suites.

1) Contract — Inputs → Outputs

Inputs arrive as task context (target API docs, auth, workflow definitions, language choice). The skill outputs code scaffolds (Python or TypeScript FastMCP server), documentation, evaluation suites, and observability artifacts. Inputs/outputs are versioned in the task folder and cross-linked to Local Memory entries for traceability.

2) Preconditions & Safeguards

  • Governance pack and RULES_OF_AI reviewed; waivers logged if necessary.
  • API auth scopes, rate limits, and error models fully documented.
  • Vibe check, connector health, and SBOM workflows executed prior to implementation.
  • North-star acceptance test recorded (e.g., “Agent schedules meeting via MCP calendar server”).
  • Feature flag strategy defined for rollout (e.g., CORTEX_MCP_<SERVICE>_BETA).
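
To make the last precondition concrete, a hedged sketch of a rollout gate following the CORTEX_MCP_<SERVICE>_BETA naming pattern; the calendar flag name is hypothetical.

```python
# Hypothetical beta gate following the CORTEX_MCP_<SERVICE>_BETA pattern.
import os

def calendar_server_enabled() -> bool:
    """Gate the new MCP calendar server behind a beta flag during rollout."""
    return os.environ.get("CORTEX_MCP_CALENDAR_BETA", "false").lower() == "true"
```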

3) Implementation Playbook (RED→GREEN→REFACTOR or analogous phases)

  1. Research & Plan (RED): Gather protocol updates (structured outputs, annotations), API docs, workflow inventory, and build the implementation plan with risk assessment.
  2. Infrastructure Scaffold (GREEN): Set up the FastMCP project (Python or TypeScript), implement auth, pagination, truncation helpers, and shared formatting utilities with tests (helper sketch after this list).
  3. Tool Construction: Add workflow-centric tools, annotate behaviours (readonly/idempotent/destructive), implement structured outputs, and ensure error remediation hints.
  4. Observability & Docs: Add logging, metrics, and inspector scripts; document tool usage and parameters in markdown/README.
  5. Evaluation & Hardening (REFACTOR): Create ≥10 evaluations, run MCP Inspector, integrate automated agent tests, tune responses, and record Evidence Triplet + Local Memory outcomes.

4) Observability & Telemetry Hooks

  • Emit [brAInwav] structured logs with tool name, workflow, duration, and truncation decisions (sketched below).
  • Capture metrics (mcp.tools_defined, mcp.workflows_covered, mcp.evaluations_pass_rate).
  • Publish MCP Inspector transcripts and evaluation results to inspector-transcripts/ and evaluations/ directories.
  • Wire alerts for evaluation regressions (e.g., pass rate <90%) and tool errors >2%.
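
A hedged sketch of the logging hook above, using only the Python standard library; the [brAInwav] prefix and field names come from this section, while the logger wiring is illustrative.

```python
# Structured [brAInwav] log line per tool invocation; exporter wiring is
# illustrative. Counters such as mcp.evaluations_pass_rate would be reported
# separately after evaluation runs.
import json
import logging
import time

logger = logging.getLogger("brAInwav.mcp")

def log_tool_call(tool: str, workflow: str, started: float, truncated: bool) -> None:
    """Emit one structured log entry with duration and truncation decision."""
    logger.info("[brAInwav] %s", json.dumps({
        "tool": tool,
        "workflow": workflow,
        "duration_ms": round((time.monotonic() - started) * 1000, 1),
        "truncated": truncated,
    }))
```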

5) Safety, Compliance & Governance

  • Respect RULES_OF_AI instrumentation: brand all logs, retain audit evidence, and follow security guidelines (no fake telemetry).
  • Validate inputs via Pydantic/Zod; sanitize external fields; enforce truncation/pagination policies (validation sketch after this list).
  • Document auth handling, scopes, token storage, and secrets management (1Password op run).
  • Ensure destructive tools require explicit parameters and provide rollback instructions.
  • Maintain Local Memory parity (store skill application IDs in json/memory-ids.json).
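
A sketch of the validation and destructive-tool guidance, assuming Pydantic v2 on the Python side (Zod plays the same role in TypeScript); the model, field names, and confirm parameter are illustrative.

```python
# Typed input validation plus an explicit-confirmation guard for a destructive
# action. Field names and wording are illustrative.
from pydantic import BaseModel, Field

class DeleteRecordInput(BaseModel):
    record_id: str = Field(min_length=1, description="ID of the record to delete")
    confirm: bool = Field(default=False, description="Must be true to actually delete")

def delete_record(args: DeleteRecordInput) -> str:
    if not args.confirm:
        return ("Error: confirm=false. Re-run with confirm=true to delete "
                f"record {args.record_id}. Deletion cannot be undone without a backup.")
    # ... perform the delete, then report how to restore from backup if supported
    return f"Deleted record {args.record_id}."
```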

6) Success Criteria & Acceptance Tests

  • Automated tests (unit + integration) cover each tool; mutation/coverage thresholds met (≥90% where enforced).
  • MCP Inspector scripted run passes all workflows without manual intervention.
  • Evaluation XML file validates at least ten read-only tasks with canonical answers.
  • Deployment checklist completed (auth secrets, SBOM, trace context verification, documentation links).
  • Reviewer checklist (from .cortex/rules/code-review-checklist.md) signed off with no blockers.

7) Failure Modes & Recovery

  • API outages/rate limits: Implement retries, exponential backoff, and informative errors (sketched after this list); document fallback behaviours.
  • Context overflows: Default to high-signal summaries; offer detail flags and pagination; monitor truncation metrics.
  • Schema drift: Regenerate structured outputs or update Standard Schema models; rerun evaluations.
  • Auth expiration: Automate token refresh; bubble actionable errors prompting credential renewal.
  • Evaluation regressions: Use inspector transcripts to triage; update tools or prompts; rerun pack before deploy.
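
A hedged sketch of the retry guidance for rate limits, using only the standard library; the retry count, delays, and exception type are illustrative.

```python
# Exponential backoff for rate-limited API calls; when retries are exhausted the
# tool returns an actionable error rather than raising a protocol-level failure.
import time

class RateLimitError(Exception):
    """Illustrative stand-in for the target API client's rate-limit exception."""

def with_backoff(call, retries: int = 4, base_delay: float = 1.0):
    for attempt in range(retries):
        try:
            return call()
        except RateLimitError:
            if attempt == retries - 1:
                return ("Error: upstream API rate limit persisted after "
                        f"{retries} attempts. Suggested fix: narrow the query or "
                        "retry after the provider's limit window resets.")
            time.sleep(base_delay * (2 ** attempt))
```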

8) Worked Examples & Snippets

  • Python FastMCP: Code sample creating todos.* tools with structured outputs, annotations, and error handling (condensed after this list).
  • TypeScript FastMCP: Snippet using FastMCP, Zod schemas, and multiple transports (stdio/SSE/HTTP).
  • Inspector Workflow: Commands for fastmcp run server.py, npx fastmcp dev src/server.ts, and npx @modelcontextprotocol/inspector to exercise tools end-to-end.
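
The bundled scripts under resources/scripts/ are the canonical samples; the following is a condensed Python sketch of the todos.* pattern, assuming FastMCP derives structured output from a Pydantic return model and that the tool decorator accepts spec-style annotations; verify both against your FastMCP version and the bundled reference.

```python
# Condensed sketch of the todos.* tools; see resources/scripts/python-fastmcp-todo.py
# for the canonical version. Assumes structured output via the Pydantic return
# model and that @mcp.tool accepts ToolAnnotations -- check your SDK version.
from mcp.server.fastmcp import FastMCP
from mcp.types import ToolAnnotations
from pydantic import BaseModel

mcp = FastMCP("todos")

class Todo(BaseModel):
    id: str
    title: str
    done: bool

TODOS: dict[str, Todo] = {}

@mcp.tool(annotations=ToolAnnotations(readOnlyHint=True, idempotentHint=True))
def list_todos() -> list[Todo]:
    """List all todos as structured output."""
    return list(TODOS.values())

@mcp.tool()
def complete_todo(todo_id: str) -> str:
    """Mark a todo done; returns an actionable error when the id is unknown."""
    todo = TODOS.get(todo_id)
    if todo is None:
        return f"Error: no todo with id {todo_id}. Call list_todos to see valid ids."
    todo.done = True
    return f"Completed: {todo.title}"

if __name__ == "__main__":
    mcp.run()
```

Running the inspector commands from the bullet above against this server exercises both tools end to end.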

9) Memory & Knowledge Integration

  • After completing a build, store a Local Memory entry (skillUsed: "skill-mcp-builder", effectivenessScore ≥0.8) capturing outcomes, evaluation pass rate, and notable issues (entry shape sketched after this list).
  • Link related memories (API lessons, auth mitigations) via relationships({ relationship_type_enum: "reinforces" }).
  • Update memory-ids.json and cross-reference in PRs for audit readiness.
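
A hedged sketch of the entry shape implied by the bullets above; only skillUsed and the effectivenessScore threshold are named by this skill, and the remaining keys are hypothetical.

```python
# Hypothetical Local Memory entry; only skillUsed and effectivenessScore come
# from this skill's guidance, the other keys are illustrative placeholders.
memory_entry = {
    "skillUsed": "skill-mcp-builder",
    "effectivenessScore": 0.85,   # criterion above requires >= 0.8
    "evaluationPassRate": 0.9,    # from the >= 10-task evaluation pack
    "outcomes": "Calendar MCP server shipped; truncation limits tuned after first run.",
    "notableIssues": ["OAuth refresh required a scope fix"],
}
```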

10) Lifecycle & Versioning Notes

  • Version skill guides alongside MCP spec releases; track structured output support and annotation changes.
  • When FastMCP v2 ships, document migration steps (e.g., new transports or schema helpers).
  • Deprecate older tool patterns once registry v1 enforces output schemas; provide upgrade checklist.

11) References & Evidence

  • Model Context Protocol specification (2025-06-18) and schema reference (structured outputs, outputSchema).
  • FastMCP documentation for Python and TypeScript (decorators, Standard Schema, CLI tooling).
  • MCP Inspector docs for debugging and evaluation.
  • brAInwav governance pack: RULES_OF_AI, Testing Standards, Skills System Governance.
  • Evaluation artefacts: evaluations/mcp-builder/questions.xml, inspector transcripts, and Local Memory IDs.

12) Schema Gap Checklist

  • Update skill once FastMCP exposes outputSchema helpers in TypeScript.
  • Add automated script to validate evaluation XML against schema.
  • Wire skills-lint to ensure annotations are documented for every destructive tool.

For the full extended guide and canonical reference implementations, consult the resources bundled with this skill (see resources/anthropic-mcp-builder-reference.md).