---
name: plugin-development
description: Guides creation of Claude Agent SDK v2 plugins for the Model Indexer. Covers plugin structure, manifests, commands, skills, hooks, and parallel subagent execution.
---
# Plugin Development Skill

This skill covers creating Claude Agent SDK plugins for provider indexing.
## Plugin vs Skills

| Aspect | Plugins | Skills |
|---|---|---|
| Structure | Container with manifest + extensions | Single SKILL.md file |
| Contains | Commands, agents, skills, hooks, MCP | One capability |
| Sharing | Cross-project, publishable | Project-specific |
| Loading | Via `plugins` config option | Via `settingSources` |
| Namespace | Commands prefixed with plugin name | Direct access |
## Plugin Directory Structure

```
plugins/
├── openai-provider-plugin/
│   ├── .claude-plugin/
│   │   └── plugin.json              # Required manifest
│   ├── commands/
│   │   └── index-openai.md          # /openai-provider:index-openai
│   ├── skills/
│   │   └── openai-provider/
│   │       └── SKILL.md             # Field mappings
│   ├── hooks/
│   │   └── hooks.json               # Validation hooks
│   ├── scripts/
│   │   ├── fetch-models.ts          # Fetch from API
│   │   ├── check-coverage.ts        # Analyze gaps
│   │   └── validate-inference.ts    # Validate LLM outputs
│   └── data/
│       ├── known-models.json        # Static fallback
│       └── pricing.json             # Manual pricing
├── anthropic-provider-plugin/
├── google-provider-plugin/
├── openrouter-provider-plugin/
└── common-indexer-plugin/
    ├── skills/
    │   ├── data-inference/SKILL.md
    │   └── data-validation/SKILL.md
    └── hooks/
        └── hooks.json               # Schema validation
```
## Plugin Manifest

```json
// .claude-plugin/plugin.json
{
  "name": "openai-provider",
  "version": "1.0.0",
  "description": "Index models from OpenAI API with field coverage tracking",
  "author": "AI Model Registry",
  "repository": "https://github.com/your-org/ai-model-registry",
  "keywords": ["openai", "model-indexer", "ai-registry"],
  "claude-code": {
    "minVersion": "1.0.0"
  }
}
```
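As a sanity check before loading, a small script can confirm every plugin under `plugins/` has a parseable manifest with the required `name` and `version` fields. This is a minimal sketch; the `validate-manifests.ts` helper and its checks are illustrative, not part of the SDK:

```typescript
// scripts/validate-manifests.ts — hypothetical helper, not part of the SDK
import * as fs from 'node:fs';
import * as path from 'node:path';

const pluginsDir = path.resolve('plugins');

for (const entry of fs.readdirSync(pluginsDir, { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;
  const manifestPath = path.join(pluginsDir, entry.name, '.claude-plugin', 'plugin.json');
  if (!fs.existsSync(manifestPath)) {
    console.warn(`${entry.name}: missing .claude-plugin/plugin.json`);
    continue;
  }
  const manifest = JSON.parse(fs.readFileSync(manifestPath, 'utf8'));
  for (const field of ['name', 'version'] as const) {
    if (!manifest[field]) console.warn(`${entry.name}: manifest is missing "${field}"`);
  }
}
```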
## Loading Plugins

```typescript
// src/agent/index.ts
import { query } from '@anthropic-ai/claude-agent-sdk';
import * as path from 'node:path';

const pluginsDir = path.resolve('plugins');

export async function runIndexerAgent(options: {
  providers: string[];
  outputDir: string;
}) {
  const plugins = options.providers.map((provider) => ({
    type: 'local' as const,
    path: path.join(pluginsDir, `${provider}-provider-plugin`),
  }));

  // Always include common plugin
  plugins.push({
    type: 'local' as const,
    path: path.join(pluginsDir, 'common-indexer-plugin'),
  });

  for await (const message of query({
    prompt: buildOrchestratorPrompt(options),
    options: {
      plugins,
      settingSources: ['project'],
      allowedTools: ['Read', 'Write', 'Bash', 'Task', 'Glob', 'Grep', 'Skill'],
      cwd: process.cwd(),
      maxTurns: 50,
    },
  })) {
    // Handle messages
  }
}
```
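A call site might look like this; the provider list and output directory are illustrative, and each provider name is resolved to its `<name>-provider-plugin` directory as shown above:

```typescript
// Example invocation — provider names and output path are illustrative
await runIndexerAgent({
  providers: ['openai', 'anthropic', 'google'],
  outputDir: 'data/models',
});
```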
## Plugin Command Example

````markdown
<!-- commands/index-openai.md -->
---
command: index-openai
description: Index all models from OpenAI API
---

# Index OpenAI Models

## Steps

1. **Fetch Models**

   ```bash
   tsx plugins/openai-provider-plugin/scripts/fetch-models.ts > /tmp/openai-raw.json
   ```

2. **Analyze Coverage**

   ```bash
   tsx plugins/openai-provider-plugin/scripts/check-coverage.ts /tmp/openai-raw.json
   ```

3. **Review Coverage Report**
   - Fields provided by API
   - Fields needing derivation
   - Fields requiring LLM inference

4. **Infer Missing Data**
   Use the data-inference skill with prompts from the coverage report.

5. **Transform to AIModel**
   Convert raw + inferred data to the AIModel schema.

6. **Validate**

   ```bash
   tsx plugins/openai-provider-plugin/scripts/validate-inference.ts /tmp/openai-models.json
   ```
````
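The `fetch-models.ts` script referenced in step 1 is plugin-specific; for OpenAI it can be little more than an authenticated GET against the models endpoint. A minimal sketch, assuming Node 18+ for the global `fetch` and the standard `OPENAI_API_KEY` environment variable:

```typescript
// plugins/openai-provider-plugin/scripts/fetch-models.ts (sketch)
const apiKey = process.env.OPENAI_API_KEY;
if (!apiKey) {
  console.error('OPENAI_API_KEY is not set');
  process.exit(1);
}

const response = await fetch('https://api.openai.com/v1/models', {
  headers: { Authorization: `Bearer ${apiKey}` },
});
if (!response.ok) {
  console.error(`OpenAI API returned ${response.status}`);
  process.exit(1);
}

// Print the raw payload so the command can redirect it to /tmp/openai-raw.json
const payload = await response.json();
console.log(JSON.stringify(payload, null, 2));
```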
## Provider Skill Template
```markdown
<!-- skills/openai-provider/SKILL.md -->
---
name: openai-provider
description: Index models from OpenAI API. Knows which fields are provided vs need inference.
allowed-tools: Bash, Read, Write, WebFetch
---
# OpenAI Provider Indexing
## API Endpoints
- Models list: `GET https://api.openai.com/v1/models`
- Requires: `Authorization: Bearer $OPENAI_API_KEY` header
## Field Coverage
### Provided by API (confidence: 1.0)
| Field | API Path | Transform |
|-------|----------|-----------|
| `id` | `id` | none |
| `created` | `created` | Unix → ISO date |
### Derived from ID (confidence: 0.8)
| Field | Pattern | Example |
|-------|---------|---------|
| `modelPublishedAt` | `-YYYYMMDD` suffix | `gpt-4-0613` → `2023-06-13` |
### NOT Provided - Requires Inference
| Field | Inference Source | Confidence |
|-------|------------------|------------|
| `pricing.input` | OpenRouter or docs | 0.9 |
| `limits.contextWindow` | LLM knowledge | 0.85 |
| `features.vision` | Model name pattern | 0.9 |
## Known Values
- gpt-4: 8,192 context
- gpt-4-turbo: 128,000 context
- gpt-4o: $2.50/$10.00 per 1M tokens
```
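The "Derived from ID" rule can be applied mechanically when the suffix carries a full date (e.g. `gpt-4o-2024-05-13`); the short `-MMDD` suffixes such as `0613` additionally need the release year, which is one reason the confidence is 0.8. A sketch, with the function name and year lookup being illustrative:

```typescript
// Sketch: derive modelPublishedAt from the model ID suffix.
// Full-date suffixes (gpt-4o-2024-05-13) map directly; MMDD suffixes (gpt-4-0613)
// need a release-year lookup, hence the lower confidence in the table above.
function derivePublishedAt(
  modelId: string,
  knownYears: Record<string, number> = {},
): string | undefined {
  const fullDate = modelId.match(/-(\d{4})-(\d{2})-(\d{2})$/);
  if (fullDate) return `${fullDate[1]}-${fullDate[2]}-${fullDate[3]}`;

  const shortDate = modelId.match(/-(\d{2})(\d{2})$/);
  const year = knownYears[modelId];
  if (shortDate && year) return `${year}-${shortDate[1]}-${shortDate[2]}`;

  return undefined;
}

// derivePublishedAt('gpt-4-0613', { 'gpt-4-0613': 2023 }) → '2023-06-13'
```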
## Validation Hooks

```json
// hooks/hooks.json
{
  "hooks": [
    {
      "event": "PreToolUse",
      "path": "./hooks/validate-model-write.ts",
      "pattern": {
        "tool_name": "Write",
        "tool_input.file_path": "**/models*.json"
      }
    }
  ]
}
```
```typescript
// hooks/validate-model-write.ts
import type { HookInput, HookOutput } from '@anthropic-ai/claude-agent-sdk';
import { AIModelSchema } from '@ai-model-registry/spec/schemas';

export default async function validateModelWrite(
  input: HookInput,
): Promise<HookOutput> {
  // Only intercept Write calls that target model JSON files
  if (input.hook_event_name !== 'PreToolUse') return {};
  if (input.tool_name !== 'Write') return {};

  const filePath = input.tool_input?.file_path ?? '';
  if (!filePath.includes('models') || !filePath.endsWith('.json')) {
    return {};
  }

  try {
    const content = input.tool_input?.content ?? '';
    const data = JSON.parse(content);
    // Accept { models: [...] }, { data: [...] }, a bare array, or a single object
    const list = data.models ?? data.data ?? data;
    const models = Array.isArray(list) ? list : [list];

    const errors: string[] = [];
    for (const model of models) {
      const result = AIModelSchema.safeParse(model);
      if (!result.success) {
        errors.push(`${model.canonicalSlug}: ${result.error.message}`);
      }
    }

    // Block the write when any model fails schema validation
    if (errors.length > 0) {
      return {
        hookSpecificOutput: {
          hookEventName: 'PreToolUse',
          permissionDecision: 'deny',
          permissionDecisionReason: `Schema validation failed:\n${errors.slice(0, 5).join('\n')}`,
        },
      };
    }
  } catch (e) {
    return {
      hookSpecificOutput: {
        hookEventName: 'PreToolUse',
        permissionDecision: 'deny',
        permissionDecisionReason: `Invalid JSON: ${e}`,
      },
    };
  }

  return {};
}
```
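A quick way to exercise the hook outside a session is to call it directly with a fabricated PreToolUse input. The sample payload below is illustrative and deliberately incomplete:

```typescript
// Manual smoke test for the hook — sample input is illustrative
import validateModelWrite from './validate-model-write';

const result = await validateModelWrite({
  hook_event_name: 'PreToolUse',
  tool_name: 'Write',
  tool_input: {
    file_path: 'data/models.json',
    content: JSON.stringify({ models: [{ canonicalSlug: 'openai/gpt-4o' }] }),
  },
} as any);

// Expect a deny decision if the model is missing required AIModel fields
console.log(JSON.stringify(result, null, 2));
```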
## Parallel Subagent Execution

```typescript
// Orchestrator spawns subagents in parallel
const prompt = `
You are the AI Model Registry orchestrator.

## CRITICAL: Parallel Execution

Spawn all provider subagents in a SINGLE message with MULTIPLE Task tool calls.

Example - spawn 3 in parallel:
In one message, make 3 Task tool calls:
- Task(subagent_type="general-purpose", description="Index OpenAI")
- Task(subagent_type="general-purpose", description="Index Anthropic")
- Task(subagent_type="general-purpose", description="Index Google")

## Per-Provider Task Prompt

"You are indexing {PROVIDER} models.
1. Read: plugins/{PROVIDER}-provider-plugin/skills/{PROVIDER}-provider/SKILL.md
2. Run: tsx plugins/{PROVIDER}-provider-plugin/scripts/fetch-models.ts
3. Run: tsx plugins/{PROVIDER}-provider-plugin/scripts/check-coverage.ts
4. Use data-inference skill for missing fields
5. Return JSON: { provider, models, coverage, inferences }"
`;
```
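The `{PROVIDER}` placeholders are plain string substitution. A small helper (name illustrative) keeps the per-provider Task prompts consistent:

```typescript
// Build the per-provider Task prompt from the template above — helper name is illustrative
function buildProviderTaskPrompt(provider: string): string {
  return [
    `You are indexing ${provider} models.`,
    `1. Read: plugins/${provider}-provider-plugin/skills/${provider}-provider/SKILL.md`,
    `2. Run: tsx plugins/${provider}-provider-plugin/scripts/fetch-models.ts`,
    `3. Run: tsx plugins/${provider}-provider-plugin/scripts/check-coverage.ts`,
    `4. Use data-inference skill for missing fields`,
    `5. Return JSON: { provider, models, coverage, inferences }`,
  ].join('\n');
}
```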
## Execution Flow

```
ORCHESTRATOR AGENT

Spawn all providers in PARALLEL (single message, multi-Task):

   Task(OpenAI)  Task(Anthropic)   Task(Google)
        ↓               ↓               ↓
   ┌─────────┐     ┌─────────┐     ┌─────────┐
   │ OpenAI  │     │Anthropic│     │ Google  │   PARALLEL
   │ Subagent│     │ Subagent│     │ Subagent│
   │         │     │         │     │         │
   │  Fetch  │     │  Fetch  │     │  Fetch  │
   │  Check  │     │  Check  │     │  Check  │
   │  Infer  │     │  Infer  │     │  Infer  │
   └────┬────┘     └────┬────┘     └────┬────┘
        └───────────────┼───────────────┘
                        ↓
                 ┌─────────────┐
                 │    MERGE    │
                 │ Deduplicate │   SEQUENTIAL
                 │ Best source │
                 └──────┬──────┘
                        ↓
                 ┌─────────────┐
                 │  VALIDATE   │
                 └──────┬──────┘
                        ↓
                 ┌─────────────┐
                 │   OUTPUT    │
                 └─────────────┘
```
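The MERGE step deduplicates models reported by several subagents and keeps the best record per slug. A minimal sketch; the `ProviderResult` shape follows the JSON contract from the Task prompt, and everything beyond that (the `confidence` field, the "highest confidence wins" rule) is an assumption:

```typescript
// Sketch: merge per-provider results, keeping the best record per canonical slug
interface IndexedModel {
  canonicalSlug: string;
  confidence: number; // assumed aggregate confidence from coverage + inference
  [key: string]: unknown;
}

interface ProviderResult {
  provider: string;
  models: IndexedModel[];
}

function mergeProviderResults(results: ProviderResult[]): IndexedModel[] {
  const bySlug = new Map<string, IndexedModel>();
  for (const { models } of results) {
    for (const model of models) {
      const existing = bySlug.get(model.canonicalSlug);
      // "Best source" rule: prefer the record with higher confidence
      if (!existing || model.confidence > existing.confidence) {
        bySlug.set(model.canonicalSlug, model);
      }
    }
  }
  return [...bySlug.values()];
}
```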
## Creating a New Provider Plugin

```bash
# 1. Create directory structure
mkdir -p plugins/PROVIDER-provider-plugin/{.claude-plugin,commands,skills/PROVIDER-provider,hooks,scripts,data}

# 2. Create manifest
cat > plugins/PROVIDER-provider-plugin/.claude-plugin/plugin.json << 'EOF'
{
  "name": "PROVIDER-provider",
  "version": "1.0.0",
  "description": "Index models from PROVIDER API"
}
EOF

# 3. Create skill (document API and field mappings)
# 4. Create scripts (fetch, check-coverage, validate)
# 5. Create command (workflow documentation)
```
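After scaffolding, a quick structural check helps catch missing pieces before wiring the plugin into the agent. A sketch; the script name, the example provider, and the expected-file list are assumptions based on the layout above:

```typescript
// Sketch: verify a scaffolded provider plugin has the expected files
import * as fs from 'node:fs';
import * as path from 'node:path';

const provider = process.argv[2]; // e.g. "mistral" (hypothetical new provider)
if (!provider) {
  console.error('Usage: tsx scripts/check-plugin.ts <provider>');
  process.exit(1);
}

const root = path.resolve('plugins', `${provider}-provider-plugin`);
const expected = [
  '.claude-plugin/plugin.json',
  `commands/index-${provider}.md`,
  `skills/${provider}-provider/SKILL.md`,
  'hooks/hooks.json',
  'scripts/fetch-models.ts',
  'scripts/check-coverage.ts',
];

const missing = expected.filter((rel) => !fs.existsSync(path.join(root, rel)));
if (missing.length > 0) {
  console.error(`Missing from ${root}:\n  ${missing.join('\n  ')}`);
  process.exit(1);
}
console.log(`${provider}-provider-plugin looks complete`);
```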
## Benefits vs Traditional Pipeline
| Aspect | Traditional | Agent + Plugins |
|---|---|---|
| Coverage | Hardcoded | Documented in skills |
| Inference | Fixed prompts | Adapts to skill knowledge |
| Validation | Schema only | Agent catches logic issues |
| Extensibility | Code changes | Add/update skill files |
| Debugging | Log analysis | Agent explains reasoning |
| Parallelism | Manual Promise.all | Natural subagent spawning |
## When to Use Each Mode

**Traditional pipeline** (`pnpm index`):
- CI/CD scheduled runs
- Predictable, fast execution
- Stable provider coverage

**Agent mode** (`pnpm agent`):
- Adding new providers
- Investigating data quality
- Complex inference needs
- Exploring provider capabilities
## Implementation Checklist
- Create plugin directory structure
- Add plugin manifest (plugin.json)
- Create provider skill with field coverage
- Implement fetch/check-coverage scripts
- Add validation hooks
- Create command for full workflow
- Test parallel subagent execution
- Document in plugins/README.md
## Full Details

→ `docs/specs/03-model-indexer/impl_03_model_indexer.md` (Plugin Architecture section)