| name | raw-tool-creator |
| description | Create reusable RAW tools that workflows can import. Use when the user asks to create a tool, extract reusable functionality, or build a new capability for workflows. |
RAW Tool Creator Skill
Create reusable RAW tools for workflows.
When to Use This Skill
Use this skill when:
- A workflow needs functionality that should be reusable
- The user asks to create a tool for a specific task
- You identify repeated logic that should be extracted
Key Directives
- ALWAYS search existing tools first - Run `raw search "<capability>"` (checks local and remote registry)
- PREFER installation - If a remote tool exists, use `raw install <url>` instead of creating a new one
- ALWAYS use `raw create --tool` to scaffold - do not manually create directories
- ALWAYS implement `tool.py` with the actual logic - scaffolds without code are useless
- ALWAYS write tests in `test.py` - untested tools are unreliable
- Single responsibility - One tool does one thing well
- Use underscores in names - Tool names become Python modules (`web_scraper`, not `web-scraper`)
Prerequisites Checklist
Before creating a tool:
- RAW is initialized (`.raw/` directory exists)
- Searched `raw search` and found NO suitable local or remote tool
- Clear understanding of inputs and outputs
Requirements Validation (Ask Before Building)
Before implementing, ask clarifying questions when:
| Ambiguity | Example Question |
|---|---|
| Input format unclear | "Should the function accept a file path or file contents?" |
| Output structure unspecified | "Should this return a dict, list, or a custom object?" |
| Error handling unclear | "Should errors raise exceptions or return error objects?" |
| API/provider choice | "Should I use requests or httpx for HTTP calls?" |
| Scope ambiguous | "Should this tool also handle pagination, or just single requests?" |
If only one reasonable approach exists, proceed without asking.
Tool Creation Process
Step 1: Search Existing Tools
raw search "stock data" # Semantic search (PREFERRED)
raw search "fetch prices" # Try different phrasings
raw list tools # Browse local tools
If a remote tool is found:
raw install <git-url>
# Done! No need to create a new tool.
If NO tool exists: Proceed to Step 2.
Step 2: Create Tool Scaffold
raw create <name> --tool -d "<what it does>"
Naming conventions:
- Use underscores: `fetch_stock`, `parse_csv`, `generate_pdf`
- Be specific: `fetch_stock`, not `data_fetcher`
- Names are sanitized: `web-scraper` → `web_scraper` automatically
Writing searchable descriptions:
Descriptions are indexed for semantic search (raw search). Write them for discoverability.
Structure: [Action verb] [what] [from/to where] [key capabilities]
Good examples:
Fetch real-time stock prices, historical data, and dividends from Yahoo Finance API
Parse CSV files with automatic type detection, header handling, and encoding support
Generate PDF reports from structured data with charts, tables, and custom styling
Rules:
- Start with action verb: Fetch, Send, Generate, Convert, Parse, Validate, Scrape
- Include domain keywords that users might search for
- Mention data sources/destinations: from Yahoo Finance, to S3, via SMTP
- List key capabilities: retry logic, caching, pagination, HTML support
- Avoid: "This tool...", "A utility for...", "Used to..."
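Putting the naming and description rules together, a scaffold command for the stock example might look like this (illustrative; substitute your own name and description):
raw create fetch_stock --tool -d "Fetch real-time stock prices, historical data, and dividends from Yahoo Finance API"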
Step 3: Implement tool.py
Write the implementation at tools/<name>/tool.py:
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
# # ADD REQUIRED DEPENDENCIES
# ]
# ///
"""<Tool description>"""
from typing import Any
def tool_name(
required_param: str,
optional_param: int = 10,
) -> dict:
"""<Tool description>
Args:
required_param: What this parameter is for
optional_param: What this does (default: 10)
Returns:
Dictionary with results
Raises:
ValueError: If inputs are invalid
"""
# === Input Validation ===
if not required_param:
raise ValueError("required_param cannot be empty")
# === Main Logic ===
# IMPLEMENT THE TOOL'S CORE FUNCTIONALITY
result = {"processed": True}
# === Return Results ===
return result
if __name__ == "__main__":
import argparse
import json
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--required-param", required=True)
parser.add_argument("--optional-param", type=int, default=10)
args = parser.parse_args()
result = tool_name(args.required_param, args.optional_param)
print(json.dumps(result, indent=2, default=str))
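Because the PEP 723 header declares dependencies and the `__main__` block exposes a CLI, the script should also run standalone as a quick smoke test (assuming uv is installed; parameter names follow the template above):
cd tools/<name>
uv run tool.py --required-param "value"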
Step 4: Update __init__.py
The scaffold creates an __init__.py with a placeholder import. You must update it to export your actual functions:
"""<Tool description>."""
# Update this to match your implemented functions in tool.py
from .tool import my_function # Replace with your function names
__all__ = ["my_function"]
This enables imports like from tools.fetch_stock import fetch_stock.
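A minimal usage sketch, assuming the fetch_stock example and a workflow script run from the project root:
# workflow snippet (illustrative)
from tools.fetch_stock import fetch_stock

data = fetch_stock("TSLA", period="3mo")
print(data["close"][-1])  # latest closing price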
Step 5: Write Tests
Create tests at tools/<name>/test.py:
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "pytest>=8.0",
# # SAME DEPS AS tool.py
# ]
# ///
"""Tests for <tool-name>."""
import pytest
from tool import tool_name
class TestToolName:
def test_basic_usage(self) -> None:
"""Test normal usage."""
result = tool_name("test_value")
assert "processed" in result
assert result["processed"] is True
def test_with_options(self) -> None:
"""Test with optional parameters."""
result = tool_name("test", optional_param=20)
assert result is not None
def test_invalid_input(self) -> None:
"""Test error handling."""
with pytest.raises(ValueError):
tool_name("")
if __name__ == "__main__":
pytest.main([__file__, "-v"])
Step 6: Run Tests
cd tools/<name>
uv run pytest test.py -v
ONLY tell the user the tool is ready if tests pass.
Step 7: Update config.yaml
Edit tools/<name>/config.yaml with accurate:
- inputs (name, type, required, description)
- outputs (name, type, description)
- dependencies
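For illustration only, a config.yaml for the fetch_stock example might look roughly like this; the exact schema is an assumption, so keep whatever structure the scaffold generated:
inputs:
  - name: ticker
    type: str
    required: true
    description: Stock ticker symbol, e.g. TSLA
  - name: period
    type: str
    required: false
    description: History period (default "1mo")
outputs:
  - name: result
    type: dict
    description: Dates, closing prices, and volumes
dependencies:
  - yfinance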
Step 8: Report to User
After tests pass:
Tool created and tested:
- Name: <tool_name>
- Location: tools/<name>/
- Usage: from tools.<name> import <function_name>
Decision Tree
User needs tool
│
├─► Search existing: `raw search "<capability>"`
│ EXISTS → Use or extend existing
│ NOT EXISTS → Continue
│
├─► Create scaffold: `raw create <name> --tool -d "..."`
│
├─► Implement tool.py
│ - Input validation
│ - Core logic
│ - CLI support
│
├─► Update __init__.py
│ - Export your implemented functions
│
├─► Write test.py
│ - Basic usage
│ - Edge cases
│ - Error handling
│
├─► Run tests: `uv run pytest test.py -v`
│ FAIL → Fix and retry
│ PASS → Continue
│
└─► Report success to user
Tool Types and Examples
Data Fetcher
def fetch_stock(ticker: str, period: str = "1mo") -> dict:
"""Fetch stock data from yfinance."""
import yfinance as yf
stock = yf.Ticker(ticker)
hist = stock.history(period=period)
return {
"ticker": ticker,
"dates": hist.index.strftime("%Y-%m-%d").tolist(),
"close": hist["Close"].tolist(),
"volume": hist["Volume"].tolist(),
}
Data Processor
def calculate_rsi(prices: list[float], period: int = 14) -> float:
"""Calculate RSI indicator."""
import pandas as pd
series = pd.Series(prices)
delta = series.diff()
gain = delta.where(delta > 0, 0).rolling(period).mean()
loss = (-delta.where(delta < 0, 0)).rolling(period).mean()
rs = gain / loss
return float(100 - (100 / (1 + rs.iloc[-1])))
File Generator
def generate_pdf(title: str, content: str, output_path: str) -> str:
"""Generate PDF report."""
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas
c = canvas.Canvas(output_path, pagesize=letter)
c.drawString(100, 750, title)
c.drawString(100, 700, content)
c.save()
return output_path
Best Practices
Input Validation
def fetch_stock(ticker: str) -> dict:
if not ticker:
raise ValueError("Ticker cannot be empty")
if not ticker.isalpha():
raise ValueError(f"Invalid ticker format: {ticker}")
# Continue...
Error Handling
import requests

def fetch_api(url: str) -> dict:
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return {"success": True, "data": response.json()}
    except requests.RequestException as e:
        return {"success": False, "error": str(e)}
JSON-Serializable Returns
# Good - serializable
return {
"dates": dates_list, # list[str]
"values": values_list, # list[float]
}
# Bad - not serializable
return dataframe # pd.DataFrame
Validation Checklist
Before reporting success:
- `tool.py` exists and runs standalone
- `__init__.py` exports public functions
- `test.py` exists with tests
- All tests pass
- config.yaml has accurate inputs/outputs
- Function has docstring with Args/Returns
Error Recovery
When things go wrong during tool creation:
Import Errors
ModuleNotFoundError: No module named 'requests'
Fix: Add to PEP 723 header in both tool.py and test.py:
# /// script
# dependencies = ["requests>=2.28"]
# ///
Test Failures
- Read the assertion error carefully
- Check if expected vs actual values make sense
- Verify mock data matches real API format
- Fix the code or update the test
Type Errors
TypeError: expected str, got NoneType
Fix: Add input validation at function start:
def fetch_data(url: str) -> dict:
if not url:
raise ValueError("url cannot be empty")
# ...
When Stuck
If you cannot resolve an error after 2 attempts:
- Explain what's failing clearly
- Show the error and your attempted fixes
- Ask the user how they'd like to proceed
Common Pitfalls
| Pitfall | Problem | Solution |
|---|---|---|
| No input validation | Cryptic errors downstream | Validate all inputs at function start |
| Returning non-serializable types | Can't be used in workflows | Return dicts/lists, not DataFrames |
| Missing timeout | Hangs on unresponsive APIs | Always set timeout=30 |
| Catching all exceptions | Hides bugs | Catch specific exceptions only |
| No docstring | Users don't know how to use it | Always include Args/Returns docs |
| Side effects | Hard to test, unexpected behavior | Pure functions when possible (see sketch below the table) |
| Hyphenated names | Python can't import the module | Use underscores: web_scraper |
| Missing __init__.py | Import fails in workflows | Export functions from __init__.py |
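To illustrate the side-effects row, compare a function with a hidden side effect to a pure one (illustrative names, not from an existing tool):
import json

# Harder to test: writes to a hard-coded path and returns nothing
def save_report(data: dict) -> None:
    with open("report.json", "w") as f:
        json.dump(data, f)

# Easier to test: pure function; the caller decides where the output goes
def render_report(data: dict) -> str:
    return json.dumps(data, indent=2)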
Testing Pitfalls
| Pitfall | Solution |
|---|---|
| Testing with real APIs | Use mocks or fixtures (see sketch below the table) |
| No edge case tests | Test empty inputs, None, invalid values |
| Tests depend on order | Each test should be independent |
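For the real-API row, one option is to patch the network call with unittest.mock; a minimal sketch, assuming the fetch_api example from Best Practices above:
from unittest.mock import MagicMock, patch

from tool import fetch_api  # the tool under test

def test_fetch_api_success() -> None:
    """Patch requests.get so the test never hits the network."""
    fake_response = MagicMock()
    fake_response.raise_for_status.return_value = None
    fake_response.json.return_value = {"price": 42.0}
    with patch("requests.get", return_value=fake_response):
        result = fetch_api("https://example.com/api")
    assert result == {"success": True, "data": {"price": 42.0}}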
Progress Communication
Keep the user informed during tool creation:
During Implementation
Creating fetch_stock tool...
✓ Created tool scaffold
✓ Implementing main function
✓ Adding input validation
✓ Updating __init__.py with exports
✓ Writing tests
⏳ Running tests...
After Completion
✓ Tool created and tested successfully!
Name: fetch_stock
Location: tools/fetch_stock/
Usage:
from tools.fetch_stock import fetch_stock
data = fetch_stock("TSLA", period="3mo")
3 tests passed ✓
On Test Failure
✗ Tool tests failed
FAILED test.py::TestFetchStock::test_invalid_ticker
AssertionError: Expected ValueError for empty ticker
The test expects the function to raise ValueError for invalid input,
but currently it doesn't validate the ticker parameter.
Would you like me to add input validation?
Security Checklist
Before delivering any tool:
- No hardcoded secrets - Use environment variables
- Input validation - Check all parameters before use
- No arbitrary code execution - Never use eval/exec on inputs
- Safe file operations - Validate paths, prevent traversal
- Timeout on network calls - Prevent indefinite hangs
Secure API Key Access
import os
def fetch_data(ticker: str) -> dict:
api_key = os.environ.get("API_KEY")
if not api_key:
raise ValueError("API_KEY not set in environment")
# Use api_key safely...
Input Sanitization
from pathlib import Path

def process_file(filepath: str) -> str:
    # Prevent path traversal attacks
    safe_path = Path(filepath).resolve()
    if not safe_path.is_relative_to(Path.cwd()):
        raise ValueError("Invalid file path")
    return safe_path.read_text()