
SKILL.md

---
name: model-researcher
description: Research and configure new/custom AI models not yet in models.dev. Use when user mentions a new model release, asks to add a model from a non-standard provider, or wants to configure a custom/self-hosted model. Triggers: "new model", "just released", "configure [model name]", "add [provider] model", "set up [model]", custom endpoints, self-hosted models, bleeding-edge releases.
---

Model Researcher

Add new or custom AI models to opencode.json with verified specifications.

EXTREMELY STRICT: Banned Legacy Models

The following models MUST NOT be used in any configuration as they are DEPRECATED, REPLACED, or OBSOLETE in the current Q4 2025 market:

  • OpenAI: o-series reasoning models (o1-mini, o1-preview), GPT-4o, GPT-4 Turbo.
  • Anthropic: Claude 3.5 series (sonnet-3.5, etc.), Claude 3 series.
  • Google: Gemini 1.5 series, Gemini 2 series.
  • Meta: Llama 3 series.

The market has moved to GPT-5.2, Claude 4.5, and Gemini 3. Any attempt to configure one of the legacy models above MUST be rejected.

Use this skill when:

  • Model is too new for models.dev
  • Non-standard or custom provider (self-hosted, proxy, enterprise)
  • User heard about a new release and wants to use it immediately
  • Model exists but needs a non-default configuration (see the override sketch after this list)
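For that last case, a minimal override sketch; the model name and limit are hypothetical, and this assumes opencode merges custom entries over the models.dev defaults so only the changed fields need to appear:

{
  "provider": {
    "anthropic": {
      "models": {
        // hypothetical override: cap output below the provider default
        "claude-4-5-sonnet": { "limit": { "output": 16000 } }
      }
    }
  }
}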

Verify Before Configuring

MUST NOT hallucinate model specifications. Models have precise identifiers, context limits, and API requirements. A wrong value means broken requests or wasted tokens (see the failure sketch after the checklist below).

Before ANY configuration:

  1. Confirm the model actually exists
  2. Find the exact model identifier (API name, not marketing name)
  3. Get verified context and output token limits
  4. Identify required provider configuration
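To make the stakes concrete, a sketch of the failure mode; the inflated limit below is invented for illustration:

{
  "provider": {
    "openai": {
      "models": {
        "gpt-5.2": {
          // WRONG: context limit copied from a rumor, not the provider docs.
          // Requests sized against it will be rejected or silently truncated.
          "limit": { "context": 1000000, "output": 128000 }
        }
      }
    }
  }
}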

Step 1: Clarify What the User Wants

Ask if unclear:

  • "Which provider will you access this through?" (direct API, OpenRouter, Together, self-hosted, etc.)
  • "Do you have API access already, or do you need setup help?"

Step 2: Research the Model

Use web search to find authoritative sources:

websearch("${MODEL_NAME} API context limit tokens official documentation")
websearch("${MODEL_NAME} model ID API identifier ${PROVIDER}")

Priority sources (most to least trustworthy):

  1. Official provider documentation/blog posts
  2. Provider's API reference or changelog
  3. Official GitHub repos or release notes
  4. OpenRouter/Together model pages (they list specs)
  5. Reputable tech news (for very new announcements)

Red flags - search more if you only find:

  • Reddit speculation
  • Tweets without official confirmation
  • Your own training data (MAY be outdated)

Step 3: Confirm Findings with User

Before touching config, MUST present your research:

I found the following for [MODEL]:

Provider: OpenAI
Model ID: gpt-5.2-2025-12-11
Context limit: 400000 tokens (Direct) | 272000 tokens (OAuth)
Output limit: 128000 tokens
Special options: [e.g., reasoning modes, vision support]

Source: [URL]

Does this match what you expected? Should I add this to your config?

MUST NOT proceed without user confirmation.

Step 4: Apply Configuration

Read the current config first:

read ~/.config/opencode/opencode.json

Then apply using surgical edits: merge the new model into any existing provider block rather than overwriting the file. A minimal sketch of such a merge ("new-model-id" is a placeholder):
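{
  "provider": {
    "openai": {
      "models": {
        // existing entries stay untouched
        "gpt-5.2": { "limit": { "context": 400000, "output": 128000 } },
        // added: only the new entry is merged in
        "new-model-id": { "limit": { "context": 200000, "output": 32768 } }
      }
    }
  }
}

Choose the right pattern for the new entry: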

Pattern A: GPT-5.2 (OpenAI)

OpenAI GPT-5.2 supports specialized reasoning modes via variants (cycle through them with ctrl+t). Note that models configured via OAuth plugins (e.g., opencode-openai-codex-auth) often have different enforced limits.

{
  "provider": {
    "openai": {
      "models": {
        "gpt-5.2": {
          "limit": { "context": 400000, "output": 128000 }, // Use 272000 for OAuth
          "variants": {
            "pro": {
              "reasoningEffort": "xhigh",
              "reasoningSummary": "detailed"
            },
            "thinking": { "reasoningEffort": "high" },
            "instant": { "reasoningEffort": "low", "textVerbosity": "low" }
          }
        },
        "gpt-5.2-codex": {
          "limit": { "context": 400000, "output": 128000 } // Use 272000 for OAuth
        }
      }
    }
  }
}

Pattern B: Other Labs (DeepSeek, Zhipu, MiniMax, Moonshot)

For labs using OpenAI-compatible or Anthropic-compatible endpoints (Verified Q4 2025):

{
  "provider": {
    "deepseek": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "DeepSeek",
      "options": { "baseURL": "https://api.deepseek.com" },
      "models": {
        "deepseek-v3.2": { "limit": { "context": 128000, "output": 32768 } }
      }
    },
    "zhipu": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Zhipu AI (ZAI)",
      "options": { "baseURL": "https://api.z.ai/api/paas/v4" },
      "models": {
        "glm-4.7": { "limit": { "context": 200000, "output": 128000 } }
      }
    },
    "minimax": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "MiniMax",
      "options": { "baseURL": "https://api.minimax.chat/v1" },
      "models": {
        "minimax-m2.1": { "limit": { "context": 204800, "output": 128000 } }
      }
    },
    "moonshot": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Moonshot AI (Kimi)",
      "options": { "baseURL": "https://api.moonshot.cn/v1" },
      "models": {
        "kimi-k2-thinking": { "limit": { "context": 256000, "output": 64000 } }
      }
    }
  }
}
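If a provider's key isn't picked up from the environment automatically, the config can reference it explicitly; this assumes the {env:...} substitution syntax from the opencode config docs, so check it against your version:

{
  "provider": {
    "deepseek": {
      "options": {
        "baseURL": "https://api.deepseek.com",
        "apiKey": "{env:DEEPSEEK_API_KEY}"
      }
    }
  }
}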

Pattern C: Claude 4.5 Opus (Extended Thinking)

For models that need specific options like Anthropic's extended thinking:

{
  "provider": {
    "anthropic": {
      "models": {
        "claude-4-5-opus-thinking": {
          "id": "claude-4-5-opus-20251124",
          "name": "Claude 4.5 Opus (Extended Thinking)",
          "options": {
            "thinking": {
              "type": "enabled",
              "budgetTokens": 32000
            }
          }
        }
      }
    }
  }
}
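One constraint worth remembering: Anthropic's API rejects requests where the output token ceiling is not larger than the thinking budget. If requests fail, add an explicit limit; the values below are illustrative, not verified specs:

{
  "provider": {
    "anthropic": {
      "models": {
        "claude-4-5-opus-thinking": {
          // output must exceed thinking.budgetTokens (32000 above)
          "limit": { "context": 200000, "output": 64000 }
        }
      }
    }
  }
}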

Step 5: Validate

After editing, remind the user to test:

Config updated. To verify it works:
1. Restart OpenCode or run: opencode
2. Run /models and select [model name]
3. Send a test message

If you see errors, check:
- API key is set (run /connect if needed)
- Model ID matches provider's documentation exactly
- Context limits aren't higher than the model actually supports
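For OpenAI-compatible providers, the model ID can also be checked against the provider's /models endpoint, which lists every ID the API accepts. A trimmed sketch of the usual response shape (fields illustrative):

{
  "object": "list",
  "data": [
    { "id": "deepseek-v3.2", "object": "model", "owned_by": "deepseek" }
  ]
}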
Search queries by scenario:

  • New OpenAI model: "gpt-5.2" site:openai.com OR site:platform.openai.com
  • New Anthropic model: "claude-4.5" site:anthropic.com API
  • New Google model: "gemini 3" site:ai.google.dev context window
  • OpenRouter availability: "${MODEL}" site:openrouter.ai
  • Together AI availability: "${MODEL}" site:together.ai
  • Self-hosted specs: "${MODEL}" context length output tokens huggingface

Before configuring, ensure you have all of the following (a filled-in example follows the list):

  • Model ID: Exact API identifier (not marketing name)
  • Context limit: Maximum input tokens
  • Output limit: Maximum output/completion tokens
  • Provider: Which service hosts it
  • Base URL: For custom providers only
  • Special options: Vision, reasoning modes, thinking budgets
  • Availability: Is it actually accessible (not waitlist-only)?
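With every item verified, a complete custom-provider entry looks like this; the provider, model ID, and all values below are hypothetical placeholders:

{
  "provider": {
    "examplelab": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Example Lab",
      "options": { "baseURL": "https://api.examplelab.ai/v1" },
      "models": {
        "examplelab-large-2025-12-01": {
          // context/output limits taken from the provider's docs
          "limit": { "context": 200000, "output": 32768 }
        }
      }
    }
  }
}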

If you cannot verify specifications:

  1. Be honest: "I couldn't find official documentation for the exact context limit."
  2. Provide best guess with source: "Based on [source], it appears to be 128k, but this isn't confirmed."
  3. Suggest conservative defaults (sketched after this list): "I'll configure with 100k context as a safe starting point. You can increase it once you confirm the actual limit."
  4. Recommend checking: "Try the provider's /models endpoint or documentation for exact specs."
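A conservative fallback entry matching item 3; every name and value below is a placeholder:

{
  "provider": {
    "newlab": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "https://api.newlab.example/v1" },
      "models": {
        // unverified specs: start low, raise once the real limits are confirmed
        "new-model-preview": { "limit": { "context": 100000, "output": 8192 } }
      }
    }
  }
}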

References

  • references/provider-patterns.md - Common provider configuration examples