Claude Code Plugins

Community-maintained marketplace


Use Gemini CLI for research with Google Search grounding and 1M token context

Install Skill

1. Download skill

2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please review the skill's instructions and verify what it does before using it.

SKILL.md

name: gemini-research
description: Use Gemini CLI for research with Google Search grounding and 1M token context
allowed-tools: Bash, Read, Write, TodoWrite, WebFetch, Glob, Grep

Gemini Research Skill

Purpose

Route research tasks to Gemini CLI when:

  • Real-time information is needed (Google Search grounding)
  • Context exceeds Claude's 200k-token limit (Gemini supports 1M)
  • Web-grounded, factual answers are needed

Unique Capability

What Gemini Does Better:

  • Google Search grounding for current information
  • 1M token context for massive document analysis
  • 70+ extensions (Figma, Stripe, Shopify, etc.)
  • Web content analysis with source attribution

When to Use

Perfect For:

  • Current events, recent documentation
  • Large codebase analysis (>150k tokens)
  • Literature reviews with many papers
  • Real-time API documentation lookup
  • Market research, competitor analysis

Don't Use When:

  • Offline/airgapped environments
  • Complex multi-step reasoning (use Claude)
  • Code generation requiring iteration (use Codex)
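The routing rules above can be sketched as a small decision helper. This is an illustration, not part of the skill; the function name, inputs, and the 200k threshold are assumptions drawn from the lists above.

```javascript
// Illustrative routing heuristic (hypothetical helper, not shipped with the skill).
const CLAUDE_CONTEXT_LIMIT = 200_000; // tokens

function shouldUseGemini({ needsLiveData, estimatedTokens, offline }) {
  if (offline) return false;          // Gemini needs network access; stay with Claude
  if (needsLiveData) return true;     // Google Search grounding for current information
  return estimatedTokens > CLAUDE_CONTEXT_LIMIT; // only Gemini's 1M window fits the task
}
```

Complex multi-step reasoning and iterative code generation stay with Claude or Codex regardless of context size, per the list above.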

Usage

Basic Research

/gemini-research "What are the latest React 19 best practices?"

With Context Files

/gemini-research "Analyze architecture" --context @src/

Large Document Analysis

/gemini-research "Summarize all papers" --context papers/*.pdf

Command Pattern

bash scripts/multi-model/gemini-research.sh "<query>" "<task_id>" "json"

Memory Integration

Results stored to Memory-MCP:

  • Key: multi-model/gemini/research/{task_id}
  • Tags: WHO=gemini-cli, WHY=research
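Key and tag construction follows the conventions above; the helper name below is hypothetical.

```javascript
// Builds the Memory-MCP key for a research result (hypothetical helper).
function memoryKey(taskId) {
  return `multi-model/gemini/research/${taskId}`;
}

// Tags attached to each stored result, per the convention above.
const memoryTags = { WHO: "gemini-cli", WHY: "research" };
```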

Output Format

{
  "content": "Research findings...",
  "sources": ["url1", "url2"],
  "model": "gemini-2.5-pro",
  "timestamp": "2025-12-28T..."
}
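A minimal validator for this envelope, assuming the four fields shown above are always present (the function name is hypothetical):

```javascript
// Parses and sanity-checks a Gemini research result before handing it to Claude.
function parseResearchResult(raw) {
  const result = JSON.parse(raw);
  for (const field of ["content", "sources", "model", "timestamp"]) {
    if (!(field in result)) {
      throw new Error(`gemini result missing field: ${field}`);
    }
  }
  if (!Array.isArray(result.sources)) {
    throw new Error("sources must be an array of URLs");
  }
  return result;
}
```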

Handoff to Claude

After Gemini research completes:

  1. Results stored in Memory-MCP
  2. Claude agents read from memory key
  3. Use research to inform implementation
// Claude agent reads Gemini research
const research = memory_retrieve("multi-model/gemini/research/{task_id}");
Task("Coder", `Implement using: ${research.content}`, "coder");

Configuration

  • Retries: 3 attempts on failure
  • Timeout: 60 seconds per query
  • Fallback: Claude researcher agent if Gemini unavailable
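The retry-and-fallback policy above can be sketched as follows. The helper names are hypothetical, and the 60-second timeout is assumed to be enforced inside the Gemini call itself:

```javascript
// Retries Gemini up to `retries` times, then falls back to the Claude
// researcher agent (sketch of the configuration above; names hypothetical).
async function researchWithRetry(runGemini, runClaudeFallback, query, { retries = 3 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await runGemini(query); // each call enforces its own 60s timeout
    } catch (err) {
      if (attempt === retries) break; // exhausted: fall through to Claude
    }
  }
  return runClaudeFallback(query);
}
```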