| Field | Value |
| --- | --- |
| name | enact/firecrawl |
| version | 1.2.1 |
| description | Scrape, crawl, search, and extract structured data from websites using Firecrawl API - converts web pages to LLM-ready markdown |
| enact | 2.0 |
| from | python:3.12-slim |
| build | pip install requests |
| env | (object) |
| command | python /workspace/firecrawl.py ${action} ${url} ${formats} ${limit} ${only_main_content} ${prompt} ${schema} |
| timeout | 300s |
| license | MIT |
| tags | web-scraping, crawling, markdown, llm, ai, data-extraction, search, structured-data |
| annotations | (object) |
| inputSchema | (object) |
| outputSchema | (object) |
| examples | (5 objects) |
# Firecrawl Web Scraping Tool
A powerful web scraping tool that uses the Firecrawl API to convert websites into clean, LLM-ready markdown and extract structured data.
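The manifest's `command` field above passes a fixed sequence of positional arguments to `firecrawl.py`. The script itself is not included in this document, so the following is only a hypothetical sketch of how such a script might read those arguments, assuming they arrive in the order given by the command template:

```python
import sys

# Hypothetical sketch; the real firecrawl.py is not shown in this document.
# The manifest's command template passes arguments positionally, in this order:
#   action, url, formats, limit, only_main_content, prompt, schema
def parse_args(argv):
    keys = ["action", "url", "formats", "limit", "only_main_content", "prompt", "schema"]
    values = argv + [""] * (len(keys) - len(argv))  # missing optional args become ""
    return dict(zip(keys, values))

if __name__ == "__main__":
    args = parse_args(sys.argv[1:])
    print(args)
```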
## Features

- **Scrape**: Extract content from a single URL as markdown, HTML, or with screenshots
- **Crawl**: Automatically discover and scrape all accessible subpages of a website
- **Map**: Get a list of all URLs on a website without scraping their content (extremely fast)
- **Search**: Search the web and get full scraped content from the results
- **Extract**: Use AI to extract structured data from pages with natural-language prompts
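Each of these actions corresponds to an endpoint of the Firecrawl REST API (`/v1/scrape`, `/v1/crawl`, `/v1/map`, `/v1/search`, and `/v1/extract` in the public v1 API). As an illustration only, not the tool's actual implementation, a `scrape` call with the `requests` library could look roughly like this:

```python
import os
import requests

# Illustrative sketch of the "scrape" action against Firecrawl's public v1 API.
# The payload fields and response shape follow the published API; the actual
# firecrawl.py shipped with this tool may differ.
API_BASE = "https://api.firecrawl.dev/v1"

def scrape(url, formats=("markdown",), only_main_content=True):
    response = requests.post(
        f"{API_BASE}/scrape",
        headers={"Authorization": f"Bearer {os.environ['FIRECRAWL_API_KEY']}"},
        json={
            "url": url,
            "formats": list(formats),
            "onlyMainContent": only_main_content,
        },
        timeout=300,  # mirrors the manifest's 300s timeout
    )
    response.raise_for_status()
    return response.json()["data"]  # contains "markdown" and "metadata"

# data = scrape("https://example.com")
# print(data["markdown"][:500])
```

The other actions are similar POST requests to their respective endpoints; `crawl` is asynchronous in the v1 API (it returns a job id that is then polled for results), which is presumably why the manifest allows a generous 300s timeout.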
## Setup

- Get an API key from firecrawl.dev
- Set your API key as a secret:

```bash
enact env set FIRECRAWL_API_KEY <your-api-key> --secret --namespace enact
```

This stores your API key securely in your OS keyring (macOS Keychain, Windows Credential Manager, or Linux Secret Service).
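At run time the tool reads the key from its environment inside the container. The manifest's `env` block is not expanded above, so the check below assumes the conventional `FIRECRAWL_API_KEY` variable name:

```python
import os

# Assumption: the secret is injected as the FIRECRAWL_API_KEY environment
# variable. Fail early with a clear message if it is missing.
api_key = os.environ.get("FIRECRAWL_API_KEY")
if not api_key:
    raise SystemExit("FIRECRAWL_API_KEY is not set; see the Setup section.")

headers = {"Authorization": f"Bearer {api_key}"}
```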
## Usage Examples
### Scrape a single page

```bash
enact run enact/firecrawl -a '{"url": "https://example.com", "action": "scrape"}'
```
### Crawl an entire documentation site

```bash
enact run enact/firecrawl -a '{"url": "https://docs.example.com", "action": "crawl", "limit": 20}'
```
### Map all URLs on a website

```bash
enact run enact/firecrawl -a '{"url": "https://example.com", "action": "map"}'
```
### Search the web

For `search`, the `url` field carries the search query rather than a URL:

```bash
enact run enact/firecrawl -a '{"url": "latest AI developments 2024", "action": "search", "limit": 5}'
```
### Extract structured data with AI

```bash
enact run enact/firecrawl -a '{"url": "https://news.ycombinator.com", "action": "extract", "prompt": "Extract the top 10 news headlines with their URLs"}'
```
### Extract with a JSON schema

```bash
enact run enact/firecrawl -a '{
  "url": "https://example.com/pricing",
  "action": "extract",
  "prompt": "Extract pricing information",
  "schema": "{\"type\":\"object\",\"properties\":{\"plans\":{\"type\":\"array\",\"items\":{\"type\":\"object\",\"properties\":{\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"string\"}}}}}}"
}'
```
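Because the schema travels as a JSON string inside a JSON argument, the escaping is tedious to write by hand. If you generate the invocation from a script, one way to avoid quoting mistakes is to build the schema as a plain Python dict and let `json.dumps` handle the escaping (this mirrors the example above and introduces no new parameters):

```python
import json

# Build the nested schema as a plain dict, then serialize twice:
# once for the "schema" string value, once for the full -a argument.
schema = {
    "type": "object",
    "properties": {
        "plans": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "string"},
                },
            },
        },
    },
}

args = {
    "url": "https://example.com/pricing",
    "action": "extract",
    "prompt": "Extract pricing information",
    "schema": json.dumps(schema),  # the schema travels as an escaped JSON string
}

print(json.dumps(args))  # paste the output after: enact run enact/firecrawl -a
```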
## Output

The tool returns JSON with:

- `markdown`: clean, LLM-ready content
- `metadata`: title, description, language, source URL
- `extract`: structured data (extract action only)
- `links`: discovered URLs (map action only)
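If you pipe the tool's output into another program, it can be consumed as ordinary JSON. A small sketch, assuming the tool writes a single JSON object with the fields above to stdout:

```python
import json
import sys

# Assumption: the tool prints one JSON object (as described above) to stdout.
result = json.load(sys.stdin)

metadata = result.get("metadata", {})
print(metadata.get("title", "(no title)"))

if "markdown" in result:
    print(result["markdown"][:200])   # first part of the LLM-ready content

for link in result.get("links", []):  # present for the map action
    print(link)
```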
## API Features
Firecrawl handles the hard parts of web scraping:
- Anti-bot mechanisms
- Dynamic JavaScript content
- Proxies and rate limiting
- PDF and document parsing
- Screenshot capture