Claude Code Plugins

Community-maintained marketplace


Access and interact with Large Language Models from the command line using Simon Willison's llm CLI tool. Supports OpenAI, Anthropic, Gemini, Llama, and dozens of other models via plugins. Features include chat sessions, embeddings, structured data extraction with schemas, prompt templates, conversation logging, and tool use. This skill is triggered when the user says things like "run a prompt with llm", "use the llm command", "call an LLM from the command line", "set up llm API keys", "install llm plugins", "create embeddings", or "extract structured data from text".

Install Skill

1. Download skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: llm
description: Access and interact with Large Language Models from the command line using Simon Willison's llm CLI tool. Supports OpenAI, Anthropic, Gemini, Llama, and dozens of other models via plugins. Features include chat sessions, embeddings, structured data extraction with schemas, prompt templates, conversation logging, and tool use. This skill is triggered when the user says things like "run a prompt with llm", "use the llm command", "call an LLM from the command line", "set up llm API keys", "install llm plugins", "create embeddings", or "extract structured data from text".

LLM CLI Tool Skill

A CLI tool and Python library for interacting with Large Language Models from OpenAI, Anthropic (Claude), Google (Gemini), Meta (Llama), and dozens of other providers, via remote APIs or locally installed models.
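
Installation itself is covered in docs/setup.md; as a minimal sketch, assuming a pip-based setup and an OpenAI API key, getting started looks like this:

# Install the CLI (pipx or Homebrew also work)
pip install llm

# Store an API key, then run a first prompt
llm keys set openai
llm "Ten quick facts about pelicans"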

When to Use This Skill

Use this skill when:

  • Running prompts against LLMs from the command line
  • Managing conversations and chat sessions
  • Working with embeddings for semantic search (see the example after this list)
  • Extracting structured data using schemas
  • Installing and configuring LLM plugins
  • Managing API keys for various providers
  • Using templates for reusable prompts
  • Logging and analyzing LLM interactions
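
Since the quick reference below does not cover embeddings, here is a minimal sketch; the 3-small model ID is an assumption (the OpenAI alias for text-embedding-3-small, available once an OpenAI key is configured), and collections are stored in llm's local SQLite database:

# Embed a single string and print the vector
llm embed -m 3-small -c "Example content"

# Embed every Markdown file in the current directory into a named collection
llm embed-multi docs -m 3-small --files . '*.md'

# Find the stored items most similar to a query
llm similar docs -c "how do I configure plugins?"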

Quick Reference

Basic Commands

# Run a prompt
llm "Your prompt here"

# Use a specific model
llm -m claude-4-opus "Your prompt"

# Chat mode
llm chat -m gpt-4.1

# With attachments (images, audio, video)
llm "describe this" -a image.jpg

# Pipe content
cat file.py | llm -s "Explain this code"

Key Management

llm keys set openai
llm keys set anthropic
llm keys set gemini
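
Stored keys can be inspected, and keys can also come from environment variables; a short sketch using the standard llm key commands:

# List stored key names and show where keys.json lives
llm keys
llm keys path

# Environment variables such as OPENAI_API_KEY also work
OPENAI_API_KEY=sk-... llm "Hello"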

Plugin Management

llm install llm-anthropic
llm install llm-gemini
llm install llm-ollama
llm plugins
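
After a plugin is installed, its models appear in llm models; a sketch of checking what is available and picking a default (the claude-4-opus ID assumes llm-anthropic is installed, as above):

# List every model contributed by installed plugins
llm models

# Set the default model used when -m is omitted
llm models default claude-4-opus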

Documentation Index

Core Documentation

  • README.md - Project overview and quick start guide
  • docs/setup.md - Installation and initial configuration
  • docs/usage.md - Comprehensive CLI usage guide (prompts, chat, attachments, conversations)
  • docs/help.md - Complete command reference and help text

Model Configuration

Advanced Features

Embeddings

Plugins

Python API & Development

Reference

Common Workflows

Starting a Conversation

# Start chat with context
llm chat -m gpt-4.1 -s "You are a helpful coding assistant"

# Continue a previous conversation
llm -c "Follow up question"
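
Every prompt and response is logged to a local SQLite database, so past conversations can be reviewed; a brief sketch using the logging commands:

# Show the most recent conversation
llm logs -c

# Browse the last five logged prompts and responses
llm logs -n 5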

Working with Files

# Analyze code
cat script.py | llm "Review this code for bugs"

# Process multiple files
cat *.md | llm "Summarize these documents"
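
Recent llm releases (0.24 and later) also support fragments via -f, which attach files or URLs as context without piping; a sketch, with the file name as a placeholder:

# Attach a file as a fragment instead of piping it
llm -f script.py "Review this code for bugs"

# Fragments can also be URLs
llm -f https://llm.datasette.io/en/stable/ "Summarize this documentation page"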

Structured Output

# Extract data with schema
llm -m gpt-4.1 "Extract person info" -a photo.jpg --schema name,age,occupation

Template Usage

# List templates
llm templates

# Use a template
llm -t summarize < article.txt
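
Templates are created by saving a prompt or system prompt under a name with --save; a minimal sketch (the summarize name matches the example above):

# Save a reusable system prompt as a template
llm -s "Summarize the key points as short bullets" --save summarize

# Reuse it against any input
llm -t summarize < article.txt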