Claude Code Plugins

Community-maintained marketplace

Generate and execute Python code to analyze large log datasets, detect patterns, and extract actionable insights

Install Skill

  1. Download the skill
  2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
  3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Review the skill's instructions and verify its behavior before using it.

SKILL.md

name: log-analytics
description: Generate and execute Python code to analyze large log datasets, detect patterns, and extract actionable insights

Log Analytics Skill

Purpose: Generate and execute Python code to analyze large log datasets, detect patterns, and extract actionable insights.

When to Use: When you need to analyze 500+ log entries, detect error patterns, calculate statistics, or perform time-series analysis on log data.

🚨 CRITICAL SECURITY RULE: ALL file paths MUST be relative to the project directory and start with analytics/. NEVER use /tmp/, /private/tmp/, or any path outside the project workspace.
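
A generated script can also enforce this rule defensively before writing any file. The sketch below is illustrative only; the helper name and the use of pathlib are assumptions, not part of the skill itself.

```python
from pathlib import Path

# Hypothetical guard: every output path must resolve inside the analytics/ directory.
ANALYTICS_DIR = Path("analytics").resolve()


def safe_output_path(relative_path: str) -> Path:
    """Resolve a path under analytics/ and refuse anything that escapes it."""
    candidate = (Path("analytics") / relative_path).resolve()
    if candidate != ANALYTICS_DIR and ANALYTICS_DIR not in candidate.parents:
        raise ValueError(f"Refusing to write outside analytics/: {candidate}")
    return candidate
```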


🎯 Skill Overview

This skill guides you through:

  1. Fetching raw log data (1000+ entries)
  2. Generating Python analysis code tailored to the data structure
  3. Executing the code and interpreting results

CRITICAL: This skill uses progressive disclosure. You MUST read phase files in order.


🚀 Workflow

MANDATORY FIRST STEP:

Before using any tools, use the Read tool to read:

.claude/skills/log-analytics/phases/data-fetch.md

This file contains Phase 1 instructions and tells you which file to read next.

DO NOT proceed with tool calls until you've read Phase 1.

The complete workflow consists of 3 phases:

  1. Data Fetch (1-2 min) → phases/data-fetch.md
  2. Code Generation (2-3 min) → phases/code-generation.md
  3. Analysis Execution (1-2 min) → phases/analysis-execution.md

Each phase file contains a "Next Step" section directing you to the next phase.


🔑 Key Principles

Progressive Disclosure: Phase files reveal detailed instructions progressively. Read each phase file in sequence - do not skip ahead or assume you know what to do.

Dynamic Code Generation: Generate Python code based on the ACTUAL log structure returned. Don't use generic templates.

Structured Output: Always provide analysis results in JSON format with counts, percentages, and trends.

Save Your Work: Save generated scripts to analytics/ directory for reuse and auditing.
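
To make these principles concrete, here is a minimal sketch of what a generated script could look like. It is not the skill's actual output: the log fields (timestamp, level, service, message, latency_ms) are assumed for illustration, and a real script must be derived from the structure actually returned in Phase 1.

```python
#!/usr/bin/env python3
"""Illustrative analysis script for a hypothetical log schema.

Assumes analytics/incident_logs.json holds a list of entries such as:
    {"timestamp": "2024-01-01T12:00:00Z", "level": "ERROR",
     "service": "checkout", "message": "upstream timeout", "latency_ms": 182}
"""
import json
from collections import Counter
from datetime import datetime

LOG_PATH = "analytics/incident_logs.json"  # relative path inside the project workspace


def percentile(values, pct):
    """Approximate nearest-rank percentile; returns None for an empty list."""
    if not values:
        return None
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]


def main():
    with open(LOG_PATH) as fh:
        entries = json.load(fh)

    errors = [e for e in entries if e.get("level") == "ERROR"]

    # Error counts by message type and by service
    by_type = Counter(e.get("message", "unknown") for e in errors)
    by_service = Counter(e.get("service", "unknown") for e in errors)

    # Hourly error distribution for a simple time-based trend
    by_hour = Counter(
        datetime.fromisoformat(e["timestamp"].replace("Z", "+00:00")).strftime("%H:00")
        for e in errors
        if "timestamp" in e
    )

    # Latency percentiles across entries that report latency_ms
    latencies = [e["latency_ms"] for e in entries if isinstance(e.get("latency_ms"), (int, float))]

    # Naive anomaly flag: hours with more than twice the mean hourly error count
    mean_hourly = sum(by_hour.values()) / len(by_hour) if by_hour else 0
    anomalous_hours = sorted(h for h, c in by_hour.items() if mean_hourly and c > 2 * mean_hourly)

    results = {
        "total_entries": len(entries),
        "error_count": len(errors),
        "error_rate_pct": round(100 * len(errors) / max(len(entries), 1), 2),
        "errors_by_type": dict(by_type.most_common(10)),
        "errors_by_service": dict(by_service),
        "errors_by_hour": dict(sorted(by_hour.items())),
        "latency_p95_ms": percentile(latencies, 95),
        "latency_p99_ms": percentile(latencies, 99),
        "anomalous_hours": anomalous_hours,
    }
    print(json.dumps(results, indent=2))


if __name__ == "__main__":
    main()
```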


📊 Expected Outputs

By the end of this skill execution, you will have:

  1. Raw log data saved to analytics/incident_logs.json
  2. Python analysis script saved to analytics/parse_logs_[timestamp].py
  3. Analysis results in JSON format showing:
    • Error counts by type
    • Time-based error distribution
    • Service-level breakdown
    • Performance metrics (p95, p99)
    • Detected anomalies
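
For illustration only, such a results payload might look like the following; every field name and value here is hypothetical, and the real structure should follow from the script you generate:

```json
{
  "total_entries": 1843,
  "error_count": 212,
  "error_rate_pct": 11.5,
  "errors_by_type": {"TimeoutError": 97, "ConnectionReset": 64, "ValidationError": 51},
  "errors_by_service": {"checkout": 120, "payments": 58, "auth": 34},
  "errors_by_hour": {"13:00": 13, "14:00": 150, "15:00": 49},
  "latency_p95_ms": 870,
  "latency_p99_ms": 2140,
  "anomalous_hours": ["14:00"]
}
```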

🔗 Integration

This skill can be invoked by other skills (e.g., incident-analysis) when they need deep log analysis.

From incident-analysis skill:

When log data exceeds 500 entries, invoke the log-analytics skill:
Use Skill tool → "log-analytics"

📁 MCP Tools Used

This skill requires the log-analytics-server MCP server, which provides:

  • get_raw_logs(incident_id, timeframe) - Fetch large log datasets
  • execute_analysis_script(script_path) - Run generated Python code

Ready to begin?

Use the Read tool to read: .claude/skills/log-analytics/phases/data-fetch.md