Claude Code Plugins

Community-maintained marketplace


Performance optimization with async patterns, caching, and connection pooling

Install Skill

1. Download skill

2. Enable skills in Claude

   Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

   Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name performance
description Performance optimization with async patterns, caching, and connection pooling
license MIT
compatibility opencode

What I do

  • Optimize code for performance using async patterns
  • Implement caching strategies (lru_cache, Redis)
  • Configure connection pooling for HTTP clients
  • Profile and measure performance improvements

When to use me

Use this when you need to:

  • Optimize slow API calls
  • Add caching to expensive operations
  • Configure connection pooling
  • Profile code performance

MCP-First Workflow

Always use MCP servers in this order:

  1. codebase - Search for performance patterns

    search_codebase("async performance patterns caching", top_k=10)
    
  2. filesystem - Read the code to optimize

    read_file("src/module.py")
    
  3. git - Check for performance-related changes

    git_diff("HEAD~10..HEAD", path="src/")
    

Optimization Techniques

Async Patterns

# BEFORE (blocking)
import requests

def fetch_data(url):
    return requests.get(url).json()

# AFTER (async)
import httpx

async def fetch_data(url: str) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
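Once endpoints are async, they can also be fetched concurrently with asyncio.gather(), which is where most of the speedup comes from. A minimal self-contained sketch (the 0.1 s sleep stands in for an HTTP round trip; a real version would use httpx as above, and the URLs are placeholders):

```python
import asyncio
import time

async def fetch_data(url: str) -> dict:
    await asyncio.sleep(0.1)  # simulated network round trip
    return {"url": url}

async def fetch_all(urls: list[str]) -> list[dict]:
    # gather() runs all coroutines concurrently, so total wall time
    # is roughly one round trip, not one per URL
    return await asyncio.gather(*(fetch_data(u) for u in urls))

start = time.perf_counter()
results = asyncio.run(fetch_all([f"https://example.com/{i}" for i in range(10)]))
elapsed = time.perf_counter() - start
```

Ten sequential calls would take about a second here; the concurrent version finishes in roughly one round trip.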

Caching

from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_computation(text: str) -> int:
    # Stand-in for real work; results are memoized per (hashable) argument
    return sum(ord(c) for c in text)
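To verify the cache is actually being hit, lru_cache exposes cache_info(). A small sketch with a call counter (the counter and the character-sum body are illustrative stand-ins for real work):

```python
from functools import lru_cache

calls = 0  # counts how many times the function body actually runs

@lru_cache(maxsize=128)
def expensive_computation(text: str) -> int:
    global calls
    calls += 1
    return sum(ord(c) for c in text)  # stand-in for expensive work

first = expensive_computation("hello")
second = expensive_computation("hello")  # served from the cache
info = expensive_computation.cache_info()
```

After the two calls, cache_info() reports one miss and one hit, and the body ran only once. Note that all arguments must be hashable for lru_cache to work.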

Connection Pooling

import httpx

# Reuse one client so requests share pooled keep-alive connections
async with httpx.AsyncClient(
    limits=httpx.Limits(max_keepalive_connections=5, max_connections=10)
) as client:
    responses = [await client.get(u) for u in urls]  # urls: your endpoints

Common Optimizations

| Issue | Solution |
| --- | --- |
| Blocking I/O | Convert to async with httpx.AsyncClient |
| Repeated computation | Add @lru_cache or use a Redis cache |
| N+1 queries | Batch queries or use asyncio.gather() |
| Large data transfers | Stream data, use pagination |
| Slow regex | Compile patterns with re.compile() |