---
name: log-analytics
description: Generate and execute Python code to analyze large log datasets, detect patterns, and extract actionable insights
---
Log Analytics Skill
Purpose: Generate and execute Python code to analyze large log datasets, detect patterns, and extract actionable insights.
When to Use: When you need to analyze 500+ log entries, detect error patterns, calculate statistics, or perform time-series analysis on log data.
🚨 CRITICAL SECURITY RULE:
ALL file paths MUST be relative to the project directory and start with analytics/
NEVER use /tmp/, /private/tmp/, or any path outside the project workspace
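The path rule above can be enforced programmatically before any file is written. The following is a minimal sketch of such a check; the helper name `is_allowed_path` is illustrative and not part of the skill's required API.

```python
from pathlib import Path

def is_allowed_path(path_str: str) -> bool:
    """Accept only relative paths that stay inside the analytics/ directory."""
    p = Path(path_str)
    if p.is_absolute():
        return False  # rejects /tmp/, /private/tmp/, and all other absolute paths
    parts = p.parts
    # Must start with analytics/ and must not escape it via ".."
    return len(parts) > 1 and parts[0] == "analytics" and ".." not in parts

print(is_allowed_path("analytics/incident_logs.json"))  # True
print(is_allowed_path("/tmp/out.json"))                 # False
```

Validating paths up front turns a policy statement into a hard failure at write time, which is easier to audit than relying on generated code to behave.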
🎯 Skill Overview
This skill guides you through:
- Fetching raw log data (1000+ entries)
- Generating Python analysis code tailored to the data structure
- Executing the code and interpreting results
CRITICAL: This skill uses progressive disclosure. You MUST read phase files in order.
🚀 Workflow
MANDATORY FIRST STEP:
Before making any other tool calls, use the Read tool to read:
.claude/skills/log-analytics/phases/data-fetch.md
This file contains Phase 1 instructions and tells you which file to read next.
DO NOT proceed with tool calls until you've read Phase 1.
The complete workflow consists of 3 phases:
- Data Fetch (1-2 min) → phases/data-fetch.md
- Code Generation (2-3 min) → phases/code-generation.md
- Analysis Execution (1-2 min) → phases/analysis-execution.md
Each phase file contains a "Next Step" section directing you to the next phase.
🔑 Key Principles
Progressive Disclosure: Phase files reveal detailed instructions progressively. Read each phase file in sequence; do not skip ahead or assume you know what to do.
Dynamic Code Generation: Generate Python code based on the ACTUAL log structure returned. Don't use generic templates.
Structured Output: Always provide analysis results in JSON format with counts, percentages, and trends.
Save Your Work: Save generated scripts to analytics/ directory for reuse and auditing.
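The structured-output principle above can be illustrated with a short sketch: summarize parsed entries into JSON with counts and percentages. The entry fields ("level", "type") and summary keys are illustrative assumptions, not a fixed schema; the real script should be generated from the actual log structure, per the dynamic-code-generation principle.

```python
import json
from collections import Counter

# Illustrative parsed log entries; real field names come from the fetched data.
entries = [
    {"level": "ERROR", "type": "TimeoutError"},
    {"level": "ERROR", "type": "TimeoutError"},
    {"level": "ERROR", "type": "ConnectionReset"},
    {"level": "INFO", "type": None},
]

errors = [e for e in entries if e["level"] == "ERROR"]
counts = Counter(e["type"] for e in errors)

# Structured output: counts and percentages, JSON-serializable.
summary = {
    "total_entries": len(entries),
    "error_count": len(errors),
    "error_rate_pct": round(100 * len(errors) / len(entries), 1),
    "errors_by_type": dict(counts),
}
print(json.dumps(summary, indent=2))
```

Emitting a single JSON object like this makes the results machine-readable for downstream skills such as incident-analysis.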
📊 Expected Outputs
By the end of this skill execution, you will have:
- Raw log data saved to analytics/incident_logs.json
- Python analysis script saved to analytics/parse_logs_[timestamp].py
- Analysis results in JSON format showing:
- Error counts by type
- Time-based error distribution
- Service-level breakdown
- Performance metrics (p95, p99)
- Detected anomalies
🔗 Integration
This skill can be invoked by other skills (e.g., incident-analysis) when they need deep log analysis.
From incident-analysis skill:
When log data exceeds 500 entries, invoke the log-analytics skill:
Use Skill tool → "log-analytics"
📁 MCP Tools Used
This skill requires the log-analytics-server MCP server, which provides:
- get_raw_logs(incident_id, timeframe) - Fetch large log datasets
- execute_analysis_script(script_path) - Run generated Python code
Ready to begin?
Use the Read tool to read: .claude/skills/log-analytics/phases/data-fetch.md