Claude Code Plugins

Community-maintained marketplace


structured-output

@juanre/llmring

Use when extracting structured data from LLMs, parsing JSON responses, or enforcing output schemas - unified JSON schema API works across OpenAI, Anthropic, Google, and Ollama with automatic validation and parsing

Install Skill

1. Download skill
2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reviewing its instructions before using it.

SKILL.md

name structured-output
description Use when extracting structured data from LLMs, parsing JSON responses, or enforcing output schemas - unified JSON schema API works across OpenAI, Anthropic, Google, and Ollama with automatic validation and parsing

Structured Output with JSON Schema

Installation

# With uv (recommended)
uv add llmring

# With pip
pip install llmring

Provider SDKs (install what you need):

uv add "openai>=1.0"     # OpenAI
uv add "anthropic>=0.67" # Anthropic
uv add google-genai      # Google Gemini
uv add "ollama>=0.4"     # Ollama

API Overview

This skill covers:

  • response_format parameter in LLMRequest
  • JSON Schema definition
  • strict mode for validation
  • parsed field in LLMResponse
  • Cross-provider compatibility

Quick Start

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    # Define JSON schema
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(role="user", content="Generate a person")],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "person",
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "age": {"type": "integer"},
                        "email": {"type": "string"}
                    },
                    "required": ["name", "age"]
                }
            },
            "strict": True
        }
    )

    response = await service.chat(request)
    print("JSON string:", response.content)
    print("Parsed data:", response.parsed)  # Python dict

Complete API Documentation

response_format Parameter

The response_format parameter controls structured output.

Structure:

response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": str,           # Schema name
        "schema": dict         # JSON Schema definition
    },
    "strict": bool             # Optional: enforce strict validation (at response_format level)
}

Parameters:

  • type (str, required): Must be "json_schema"
  • json_schema (dict, required): Schema definition
    • name (str, required): Name for the schema
    • schema (dict, required): JSON Schema defining the structure
  • strict (bool, optional): If true, strictly enforce the schema (set at the response_format level, alongside json_schema, as in the examples below)

Example:

from llmring import LLMRequest, Message

request = LLMRequest(
    model="extractor",  # Your alias for structured extraction
    messages=[Message(role="user", content="Generate data")],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "response",
            "schema": {
                "type": "object",
                "properties": {
                    "answer": {"type": "string"}
                },
                "required": ["answer"]
            }
        },
        "strict": True
    }
)

JSON Schema Format

JSON Schema defines the expected structure.

Basic Types:

# String
{"type": "string"}

# Number (integer or float)
{"type": "number"}

# Integer only
{"type": "integer"}

# Boolean
{"type": "boolean"}

# Array
{
    "type": "array",
    "items": {"type": "string"}  # Array of strings
}

# Object
{
    "type": "object",
    "properties": {
        "field1": {"type": "string"},
        "field2": {"type": "integer"}
    },
    "required": ["field1"]  # Required fields
}

Example Schemas:

# Person schema
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "email": {"type": "string"},
        "is_active": {"type": "boolean"}
    },
    "required": ["name", "age"]
}

# List of items schema
list_schema = {
    "type": "object",
    "properties": {
        "items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "id": {"type": "integer"},
                    "title": {"type": "string"}
                }
            }
        }
    }
}

# Nested object schema
nested_schema = {
    "type": "object",
    "properties": {
        "user": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "address": {
                    "type": "object",
                    "properties": {
                        "street": {"type": "string"},
                        "city": {"type": "string"}
                    }
                }
            }
        }
    }
}
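Schemas like these can be sanity-checked locally before you send them to a provider. The sketch below uses the third-party jsonschema package (an assumption on our part; llmring does not require it) to verify that a schema is itself well-formed and to preview the errors a strict provider would raise:

```python
# Sanity-check a schema locally before sending it to a provider.
# Assumes the third-party "jsonschema" package (pip install jsonschema);
# this is not part of llmring.
from jsonschema import Draft7Validator

person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Raises jsonschema.SchemaError if the schema itself is malformed
Draft7Validator.check_schema(person_schema)

# Validate a sample instance the way a strict provider would
validator = Draft7Validator(person_schema)
errors = [e.message for e in validator.iter_errors({"name": "Alice"})]
print(errors)  # reports that "age" is required but missing
```

Catching schema mistakes locally is cheaper than discovering them through a failed provider call.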

LLMResponse with Structured Output

When using response_format, the response contains both raw JSON and parsed data.

Attributes:

  • content (str): Raw JSON string
  • parsed (dict): Parsed Python dictionary (ready to use)
  • model (str): Model used
  • usage (dict): Token usage
  • finish_reason (str): Completion reason

Example:

response = await service.chat(request)

# Both available:
json_string = response.content    # '{"name": "Alice", "age": 30}'
data = response.parsed             # {"name": "Alice", "age": 30}

# Use parsed data directly
print(f"Name: {data['name']}")
print(f"Age: {data['age']}")

Common Patterns

Extracting Structured Data

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    # Extract contact info from text
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="Extract contact info: John Smith, age 35, email john@example.com"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "contact",
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "age": {"type": "integer"},
                        "email": {"type": "string"}
                    },
                    "required": ["name"]
                }
            },
            "strict": True
        }
    )

    response = await service.chat(request)
    contact = response.parsed
    print(f"Name: {contact['name']}, Age: {contact['age']}")

Generating Lists

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="List 5 programming languages with their release years"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "languages",
                "schema": {
                    "type": "object",
                    "properties": {
                        "languages": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "name": {"type": "string"},
                                    "year": {"type": "integer"}
                                },
                                "required": ["name", "year"]
                            }
                        }
                    },
                    "required": ["languages"]
                }
            }
        }
    )

    response = await service.chat(request)
    for lang in response.parsed["languages"]:
        print(f"{lang['name']}: {lang['year']}")

Classification Tasks

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="Classify sentiment: This product is amazing!"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "sentiment",
                "schema": {
                    "type": "object",
                    "properties": {
                        "sentiment": {
                            "type": "string",
                            "enum": ["positive", "negative", "neutral"]
                        },
                        "confidence": {
                            "type": "number",
                            "minimum": 0.0,
                            "maximum": 1.0
                        }
                    },
                    "required": ["sentiment", "confidence"]
                }
            },
            "strict": True
        }
    )

    response = await service.chat(request)
    result = response.parsed
    print(f"Sentiment: {result['sentiment']} ({result['confidence']:.2f})")

Nested Structures

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="Generate a blog post with title, author info, and tags"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "blog_post",
                "schema": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "author": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "email": {"type": "string"}
                            },
                            "required": ["name"]
                        },
                        "tags": {
                            "type": "array",
                            "items": {"type": "string"}
                        },
                        "content": {"type": "string"}
                    },
                    "required": ["title", "author", "content"]
                }
            }
        }
    )

    response = await service.chat(request)
    post = response.parsed
    print(f"Title: {post['title']}")
    print(f"By: {post['author']['name']}")
    print(f"Tags: {', '.join(post['tags'])}")

Validation with Enums

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    # Enforce specific values
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(
            role="user",
            content="What's the priority of this bug?"
        )],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "bug_priority",
                "schema": {
                    "type": "object",
                    "properties": {
                        "priority": {
                            "type": "string",
                            "enum": ["low", "medium", "high", "critical"]
                        },
                        "reasoning": {"type": "string"}
                    },
                    "required": ["priority"]
                }
            },
            "strict": True
        }
    )

    response = await service.chat(request)
    # priority is guaranteed to be one of the enum values

Streaming Structured Output

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(role="user", content="Generate JSON data")],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "data",
                "schema": {
                    "type": "object",
                    "properties": {
                        "result": {"type": "string"}
                    }
                }
            }
        }
    )

    # Stream JSON construction
    full_json = ""
    async for chunk in service.chat_stream(request):
        print(chunk.delta, end="", flush=True)
        full_json += chunk.delta

    # Parse final JSON
    import json
    data = json.loads(full_json)
    print(f"\nParsed: {data}")

Strict Mode

When strict: True, the schema is strictly enforced:

response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "data",
        "schema": {...}
    },
    "strict": True  # Strict validation at response_format level
}

Strict mode guarantees:

  • Output matches schema exactly
  • All required fields present
  • No extra fields
  • Correct types
  • Enum values respected

Without strict mode:

  • Best-effort schema following
  • May have extra fields
  • Types usually correct but not guaranteed
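When strict mode is only best-effort (as with Ollama), you can validate the output yourself. A minimal sketch, again assuming the third-party jsonschema package; the parsed dict here is a stand-in for what response.parsed from service.chat(...) would contain:

```python
# Validate best-effort output yourself when strict mode isn't guaranteed.
# Assumes the third-party "jsonschema" package; "parsed" is a stand-in
# for response.parsed from an actual service.chat(...) call.
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

parsed = {"name": "Alice", "age": "thirty"}  # wrong type slipped through

try:
    validate(instance=parsed, schema=schema)
    valid = True
except ValidationError as exc:
    valid = False
    print(f"Schema violation: {exc.message}")
```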

Provider Support

Provider    JSON Schema    Strict Mode    Notes
OpenAI      Yes            Yes            Native support
Anthropic   Yes            Yes            Adapted automatically
Google      Yes            Yes            Adapted automatically
Ollama      Yes            Best-effort    Prompt-based adaptation

LLMRing automatically adapts JSON schema to each provider's format.
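For providers without native JSON-schema support, prompt-based adaptation can work roughly like the sketch below. This is illustrative only, not llmring's actual implementation:

```python
# A rough sketch of prompt-based adaptation for providers without
# native JSON-schema support (e.g. Ollama). Illustrative only --
# not llmring's actual implementation.
import json

def schema_to_system_prompt(schema: dict) -> str:
    """Embed the schema in an instruction the model can follow."""
    return (
        "Respond with a single JSON object, and nothing else, "
        "that conforms to this JSON Schema:\n"
        + json.dumps(schema, indent=2)
    )

prompt = schema_to_system_prompt({
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
})
print(prompt)
```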

Common Mistakes

Wrong: Not Using Parsed Field

# DON'T DO THIS - manually parse JSON
import json
response = await service.chat(request)
data = json.loads(response.content)  # Unnecessary

Right: Use Parsed Field

# DO THIS - use pre-parsed data
response = await service.chat(request)
data = response.parsed  # Already a dict

Wrong: Missing Required Fields

# DON'T DO THIS - forgot to mark required fields
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"}
    }
    # No "required" field!
}

Right: Specify Required Fields

# DO THIS - mark required fields
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"}
    },
    "required": ["name", "age"]  # Both required
}

Wrong: Wrong Type Names

# DON'T DO THIS - invalid type
schema = {
    "type": "object",
    "properties": {
        "value": {"type": "int"}  # Wrong! Should be "integer"
    }
}

Right: Use Correct JSON Schema Types

# DO THIS - correct types
schema = {
    "type": "object",
    "properties": {
        "text": {"type": "string"},
        "count": {"type": "integer"},  # Not "int"
        "price": {"type": "number"},   # For floats
        "active": {"type": "boolean"}  # Not "bool"
    }
}

Wrong: Not Handling Errors

# DON'T DO THIS - assume parsing always works
response = await service.chat(request)
name = response.parsed["name"]  # May fail!

Right: Handle Missing Fields

# DO THIS - handle missing fields
response = await service.chat(request)
name = response.parsed.get("name", "Unknown")
if "age" in response.parsed:
    age = response.parsed["age"]

Combining with Tools

You can use structured output and tools together:

from llmring import LLMRing, LLMRequest, Message

async with LLMRing() as service:
    request = LLMRequest(
        model="extractor",  # Your alias for structured extraction
        messages=[Message(role="user", content="Analyze this data")],
        tools=[...],  # Define tools
        response_format={  # Also request structured output
            "type": "json_schema",
            "json_schema": {
                "name": "analysis",
                "schema": {
                    "type": "object",
                    "properties": {
                        "summary": {"type": "string"},
                        "score": {"type": "number"}
                    }
                }
            }
        }
    )

    response = await service.chat(request)

    # May have tool_calls OR parsed JSON
    if response.tool_calls:
        # Handle tool execution
        pass
    elif response.parsed:
        # Handle structured output
        print(response.parsed)

Best Practices

  1. Always specify required fields: Don't rely on optional fields
  2. Use enums for constrained values: Ensures valid outputs
  3. Use strict mode in production: Guarantees schema compliance
  4. Provide clear property descriptions: Helps model understand intent
  5. Start simple: Test with basic schemas before complex nested structures
  6. Use parsed field: Don't manually parse JSON strings
  7. Handle missing fields gracefully: Use .get() with defaults
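Practices 2, 6, and 7 combine naturally when consuming a response. A short sketch, where the parsed dict stands in for response.parsed from an actual service.chat(...) call:

```python
# Defensive consumption of a parsed response, combining the practices
# above. "parsed" stands in for response.parsed from service.chat(...).
ALLOWED_SENTIMENTS = {"positive", "negative", "neutral"}

parsed = {"sentiment": "positive"}  # "confidence" missing from output

sentiment = parsed.get("sentiment", "neutral")
confidence = parsed.get("confidence", 0.0)  # default for a missing field

# Guard against out-of-enum values when strict mode is best-effort
if sentiment not in ALLOWED_SENTIMENTS:
    sentiment = "neutral"

print(f"{sentiment} ({confidence:.2f})")
```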

Related Skills

  • llmring-chat - Basic chat without structured output
  • llmring-streaming - Stream structured JSON construction
  • llmring-tools - Combine with function calling
  • llmring-lockfile - Configure models for structured output
  • llmring-providers - Provider-specific schema features