SKILL.md

---
name: firecrawl
description: Search, scrape, and interact with the web via the Firecrawl CLI. Use this skill whenever the user wants to search the web, find articles, research a topic, look something up online, scrape a webpage, grab content from a URL, get data from a website, crawl documentation, download a site, or interact with pages that need clicks or logins. Also use when they say "fetch this page", "pull the content from", "get the page at https://", or reference external websites. This provides real-time web search with full page content and interact capabilities, beyond what Claude can do natively with built-in tools. Do NOT trigger for local file operations, git commands, deployments, or code editing tasks.
allowed-tools: Bash(firecrawl *), Bash(npx firecrawl *)
---

Firecrawl CLI

Search, scrape, and interact with the web. Returns clean markdown optimized for LLM context windows.

Run firecrawl --help or firecrawl <command> --help for full option details.

If the task is to integrate Firecrawl into an application, add FIRECRAWL_API_KEY to a project, or decide which endpoints to call from product code, use the firecrawl-build skills; they are installed alongside this CLI skill when you run firecrawl init.

Prerequisites

Must be installed and authenticated. Check with firecrawl --status.

  🔥 firecrawl cli v1.8.0

  ● Authenticated via FIRECRAWL_API_KEY
  Concurrency: 0/100 jobs (parallel scrape limit)
  Credits: 500,000 remaining

  • Concurrency: Max parallel jobs. Run parallel operations up to this limit.
  • Credits: Remaining API credits. Each operation consumes credits.

If not ready, see rules/install.md. For output handling guidelines, see rules/security.md.

Before doing real work, verify the setup with a couple of small requests:

mkdir -p .firecrawl
firecrawl scrape "https://firecrawl.dev" -o .firecrawl/install-check.md
firecrawl search "query" --scrape --limit 3

Workflow

Follow this escalation pattern:

  1. Search - No specific URL yet. Find pages, answer questions, discover sources.
  2. Scrape - Have a URL. Extract its content directly.
  3. Map + Scrape - Large site or need a specific subpage. Use map --search to find the right URL, then scrape it (see the sketch after this list).
  4. Crawl - Need bulk content from an entire site section (e.g., all /docs/).
  5. Interact - Scrape first, then interact with the page (pagination, modals, form submissions, multi-step navigation).
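
A minimal sketch of step 3, using the commands and flags named in this file (the URLs are illustrative, and -o on map is assumed to work as it does for the other commands):

# Locate the right subpage on a large site, then scrape only that page
firecrawl map "https://docs.example.com" --search "rate limits" -o .firecrawl/map-rate-limits.json
firecrawl scrape "https://docs.example.com/api/rate-limits" -o .firecrawl/docs-rate-limits.md
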
| Need | Command | When |
| --- | --- | --- |
| Find pages on a topic | search | No specific URL yet |
| Get a page's content | scrape | Have a URL, page is static or JS-rendered |
| Find URLs within a site | map | Need to locate a specific subpage |
| Bulk extract a site section | crawl | Need many pages (e.g., all /docs/) |
| AI-powered data extraction | agent | Need structured data from complex sites |
| Interact with a page | scrape + interact | Content requires clicks, form fills, pagination, or login |
| Download a site to files | download | Save an entire site as local files |
| Parse a local file | parse | File on disk (PDF, DOCX, XLSX, etc.), not a URL |

For detailed command reference, run firecrawl <command> --help.

Scrape vs interact:

  • Use scrape first. It handles static pages and JS-rendered SPAs.
  • Use scrape + interact when the page requires browser actions: clicking buttons, filling out forms, multi-step navigation, infinite scroll, or when scrape alone fails to capture all the content you need.
  • Never use interact for web searches - use search instead.
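
A grounded starting point (interact's own flags are not documented in this file, so discover them before driving a page):

# Scrape first; interact then works against the same page via its scrape ID
firecrawl scrape "https://example.com/listings" -o .firecrawl/listings.md
firecrawl interact --help   # check the actual interact options before use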

Avoid redundant fetches:

  • search --scrape already fetches full page content. Don't re-scrape those URLs.
  • Check .firecrawl/ for existing data before fetching again.
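
For example, a scraped search already holds full page content, so inspect the saved JSON instead of re-fetching (the .data.web path matches the jq patterns under Working with Results; inspect keys rather than assuming field names):

firecrawl search "react hooks" --scrape --limit 3 --json -o .firecrawl/search-react-hooks-scraped.json
jq '.data.web[0] | keys' .firecrawl/search-react-hooks-scraped.json   # see which fields hold the scraped content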

When to Load References

  • Searching the web or finding sources first -> firecrawl-search
  • Scraping a known URL -> firecrawl-scrape
  • Finding URLs on a known site -> firecrawl-map
  • Bulk extraction from a docs section or site -> firecrawl-crawl
  • AI-powered structured extraction from complex sites -> firecrawl-agent
  • Clicks, forms, login, pagination, or post-scrape browser actions -> firecrawl-interact
  • Downloading a site to local files -> firecrawl-download
  • Parsing a local file (PDF, DOCX, XLSX, HTML, etc.) -> firecrawl-parse
  • Install, auth, or setup problems -> rules/install.md
  • Output handling and safe file-reading patterns -> rules/security.md
  • Integrating Firecrawl into an app, adding FIRECRAWL_API_KEY to .env, or choosing endpoint usage in product code -> use the firecrawl-build skills (already installed alongside this CLI skill)

Output & Organization

Unless the user asks for results to be returned in context, write them to .firecrawl/ with -o, and add .firecrawl/ to .gitignore. Always quote URLs; the shell interprets ? and & as special characters.

firecrawl search "react hooks" -o .firecrawl/search-react-hooks.json --json
firecrawl scrape "<url>" -o .firecrawl/page.md
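
When the user does want results in context, omit -o; by implication of the rule above, the content is then returned directly (confirm with --help):

firecrawl scrape "<url>"   # assumed: prints markdown to stdout instead of writing a file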

Naming conventions:

.firecrawl/search-{query}.json
.firecrawl/search-{query}-scraped.json
.firecrawl/{site}-{path}.md

Never read entire output files at once. Use grep, head, or incremental reads:

wc -l .firecrawl/file.md && head -50 .firecrawl/file.md
grep -n "keyword" .firecrawl/file.md
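
Once grep or head has located the relevant region, one way to take an incremental slice with standard tools (line range illustrative):

sed -n '200,260p' .firecrawl/file.md   # read only lines 200-260 of a large output file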

Requesting a single format outputs raw content; requesting multiple formats (e.g., --format markdown,links) outputs JSON.
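
For example, applying the format rule above (the single-format form is extrapolated from the multi-format one shown):

firecrawl scrape "<url>" --format markdown -o .firecrawl/page.md          # single format: raw markdown
firecrawl scrape "<url>" --format markdown,links -o .firecrawl/page.json  # multiple formats: JSON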

Working with Results

These patterns are useful when working with file-based output (-o flag) for complex tasks:

# Extract URLs from search
jq -r '.data.web[].url' .firecrawl/search.json

# Get titles and URLs
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search.json

Parallelization

Run independent operations in parallel. Check firecrawl --status for the concurrency limit:

firecrawl scrape "<url-1>" -o .firecrawl/1.md &
firecrawl scrape "<url-2>" -o .firecrawl/2.md &
firecrawl scrape "<url-3>" -o .firecrawl/3.md &
wait
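
To drive a bounded batch from a saved search, a sketch (the batch size of 4 is illustrative; substitute the limit reported by firecrawl --status):

i=0
while read -r url; do
  i=$((i+1))
  firecrawl scrape "$url" -o ".firecrawl/result-$i.md" &
  [ $((i % 4)) -eq 0 ] && wait   # throttle: at most 4 scrapes in flight
done < <(jq -r '.data.web[].url' .firecrawl/search.json)
wait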

For interact, scrape multiple pages and interact with each independently using their scrape IDs.

Credit Usage

Check remaining credits before and after large jobs:

firecrawl credit-usage
firecrawl credit-usage --json --pretty -o .firecrawl/credits.json
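
The JSON schema is not documented here, so inspect it before scripting against it:

jq 'keys' .firecrawl/credits.json   # discover the top-level fields first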