---
name: log-director-skill
description: Orchestrates multi-log workflows with parallel execution for batch operations across many logs
model: claude-haiku-4-5
---
# Log Director Skill

You are a coordination skill that manages parallel execution, progress tracking, and aggregated reporting across many logs simultaneously.

**Difference from log-manager-skill:**
- log-manager-skill: Single-log sequential workflows (one log at a time)
- log-director-skill: Multi-log parallel workflows (batch operations)
## Batch Operations

**batch-validate** (validate many logs in parallel)
- `log_type_filter` - Filter by type (or "all")
- `status_filter` - Filter by status
- `fail_fast` - Stop on first failure (default: false)
- `parallel_workers` - Concurrency limit (default: 5)

**batch-archive** (archive logs based on retention policy)
- `log_type_filter` - Which types to consider
- `retention_check` - Only archive if the retention period has passed
- `dry_run` - Show what would be archived without doing it
- `force` - Archive even if retention has not expired

**batch-reclassify** (reclassify _untyped logs)
- `source_type` - Usually "_untyped"
- `confidence_threshold` - Minimum confidence to reclassify (default: 70)
- `auto_apply` - Apply reclassification without confirmation (default: false)

**batch-cleanup** (delete archived logs past retention)
- `log_type_filter` - Which types to clean up
- `retention_buffer_days` - Extra days before deletion (default: 7)
- `dry_run` - Show what would be deleted (default: true for safety)
Example request:

```json
{
  "operation": "batch-validate",
  "log_type_filter": "test",
  "status_filter": "completed",
  "parallel_workers": 3,
  "fail_fast": false
}
```
## Batch Operation: batch-validate

### Step 1: Discover Target Logs
- Invoke the log-lister skill with the given filters
- Get the list of log paths to validate
- Report the total count: "Found N logs to validate"
### Step 2: Initialize Worker Pool
Execute `scripts/init-worker-pool.sh`:
- Create N worker processes (default: 5)
- Set up the job queue
- Initialize progress tracking
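The actual script ships with the skill; as a rough sketch of the idea, a directory-backed job queue is one plausible implementation. Everything below (pool layout, paths, metadata shape) is an illustrative assumption, not the real script:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of init-worker-pool.sh: a directory-backed job queue.
# All paths and the metadata shape here are assumptions for illustration.
set -euo pipefail

WORKER_COUNT="${1:-5}"
POOL_ID="pool-$(date +%s)-$$"
POOL_DIR="/tmp/log-director/${POOL_ID}"

# One directory per concern: pending jobs, claimed jobs, finished results.
mkdir -p "${POOL_DIR}/queue" "${POOL_DIR}/running" "${POOL_DIR}/results"

# Record pool metadata so later steps can locate the pool by ID.
cat > "${POOL_DIR}/pool.json" <<EOF
{"pool_id": "${POOL_ID}", "workers": ${WORKER_COUNT}, "created": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"}
EOF

echo "${POOL_ID}"
```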
### Step 3: Queue Validation Jobs
For each log path:
- Add a validation job to the queue
- Job: invoke log-validator with `log_path`
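With a file-per-job queue like the sketch above, enqueueing could be one JSON job spec per log; the job shape and input file are assumptions:

```bash
# Sketch: one JSON job file per log path. POOL_DIR comes from the pool sketch
# above; target-logs.txt stands in for the Step 1 discovery output.
i=0
while IFS= read -r log_path; do
  i=$((i + 1))
  printf '{"skill": "log-validator", "log_path": "%s"}\n' "$log_path" \
    > "${POOL_DIR}/queue/job-$(printf '%04d' "$i").json"
done < target-logs.txt
```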
### Step 4: Execute in Parallel
Execute `scripts/parallel-execute.sh`:
- Workers consume jobs from the queue
- Each worker validates one log at a time
- Results are collected in shared storage (`flock` guards concurrent writes)
- Progress is reported: "Validated 25/100 logs..."
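One plausible shape for the worker loop, continuing the directory-queue sketch: an atomic `mv` claims each job, and `flock` serializes result writes exactly as the concurrency note above describes. The log-validator invocation is stubbed out, since its interface isn't shown here:

```bash
#!/usr/bin/env bash
# Hypothetical worker loop for parallel-execute.sh (illustrative only).
set -euo pipefail

POOL_DIR="$1"
RESULTS="${POOL_DIR}/results/results.jsonl"

while true; do
  job=$(ls "${POOL_DIR}/queue" 2>/dev/null | head -n 1)
  [ -z "${job}" ] && break
  # mv is atomic within a filesystem, so exactly one worker claims each job.
  if mv "${POOL_DIR}/queue/${job}" "${POOL_DIR}/running/${job}" 2>/dev/null; then
    log_path=$(jq -r '.log_path' "${POOL_DIR}/running/${job}")
    # Stub: the real job would invoke the log-validator skill on $log_path.
    result="{\"log_path\": \"${log_path}\", \"status\": \"passed\", \"errors\": []}"
    # flock serializes appends so concurrent workers never interleave lines.
    {
      flock -x 9
      printf '%s\n' "$result" >> "$RESULTS"
    } 9>"${RESULTS}.lock"
    rm -f "${POOL_DIR}/running/${job}"
  fi
done
```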
### Step 5: Aggregate Results
Execute `scripts/aggregate-results.sh`:
- Collect all validation results
- Count: passed, failed (by severity), warnings
- Group errors by type
- Identify the most common issues
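If results land in a JSON-lines file as sketched above, aggregation can be a single `jq` pass. This sketch assumes result lines shaped like `{"log_path": ..., "status": ..., "errors": [...]}`, matching the summary fields below:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of aggregate-results.sh over a results.jsonl file.
set -euo pipefail

RESULTS_DIR="$1"

jq -s '{
  total:    length,
  passed:   map(select(.status == "passed"))  | length,
  failed:   map(select(.status == "failed"))  | length,
  warnings: map(select(.status == "warning")) | length,
  # Tally each distinct error message across all logs to surface common issues.
  common_issues: ([.[].errors[]?]
                  | group_by(.)
                  | map({key: .[0], value: length})
                  | from_entries)
}' "${RESULTS_DIR}/results.jsonl"
```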
### Step 6: Return Summary

```json
{
  "operation": "batch-validate",
  "status": "completed",
  "total_logs": 100,
  "results": {
    "passed": 85,
    "failed": 10,
    "warnings": 5
  },
  "failures": [
    {
      "log_path": ".fractary/logs/test/test-042.md",
      "errors": ["Missing required field: test_framework"]
    }
  ],
  "common_issues": {
    "missing_duration": 15,
    "missing_coverage": 8
  },
  "duration_seconds": 12.5,
  "parallel_workers": 5
}
```
## Batch Operation: batch-archive

### Step 1: Discover Archival Candidates
- Invoke the log-lister skill
- For each log, check its retention policy:
  - Load `types/{log_type}/retention-config.json`
  - Calculate the retention expiry date
  - Mark logs past their retention period
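A minimal sketch of the per-log retention test, assuming `retention-config.json` carries a `retention_days` field and the log's frontmatter has a `created:` date (both field names are assumptions); GNU `date` does the arithmetic:

```bash
# Sketch: has this log outlived its retention period?
# GNU date; config and frontmatter field names are assumed.
is_past_retention() {
  local log_path="$1" log_type="$2"
  local retention_days created expiry
  retention_days=$(jq -r '.retention_days' "types/${log_type}/retention-config.json")
  created=$(grep -m1 '^created:' "$log_path" | awk '{print $2}')
  expiry=$(date -d "${created} + ${retention_days} days" +%s)
  [ "$(date +%s)" -ge "$expiry" ]
}
```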
### Step 2: Validate Before Archive (unless skipped)
- Optionally validate all candidates
- Skip logs with critical errors
### Step 3: Update Status in Parallel
For each archival candidate:
- Update the frontmatter status to "archived"
- Add an `archive_date` field
- Preserve the file in its current location (for now)
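Marking a candidate as archived could be a targeted frontmatter edit; this sketch assumes YAML frontmatter with a `status:` line and relies on GNU `sed`:

```bash
# Sketch: flip status to archived and stamp archive_date.
# GNU sed; frontmatter field names are assumptions.
archive_log() {
  local log_path="$1"
  local today
  today=$(date -u +%Y-%m-%d)
  sed -i \
    -e 's/^status: .*/status: "archived"/' \
    -e "/^status: \"archived\"\$/a archive_date: \"${today}\"" \
    "$log_path"
}
```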
### Step 4: Return Archive Report

```json
{
  "operation": "batch-archive",
  "total_candidates": 45,
  "archived": 42,
  "skipped": 3,
  "skipped_reasons": {
    "validation_failed": 2,
    "retention_not_expired": 1
  },
  "by_type": {
    "session": 15,
    "build": 18,
    "test": 9
  }
}
```
## Batch Operation: batch-reclassify

### Step 1: Find _untyped Logs
- List all logs with `log_type="_untyped"`
- Report the count
### Step 2: Classify in Parallel
For each _untyped log:
- Invoke the log-classifier skill
- Get the recommended type and confidence
- If confidence >= threshold, mark the log for reclassification
### Step 3: Preview Reclassifications
Show the user:

```
Reclassification Preview:
- log-001.md: _untyped → test (95% confident)
- log-002.md: _untyped → build (87% confident)
- log-003.md: _untyped → operational (72% confident)
- log-004.md: _untyped → [uncertain, keeping as _untyped] (45%)
```
### Step 4: Apply (if auto_apply or confirmed)
For each approved reclassification:
- Update the frontmatter `log_type`
- Move the log to the correct type directory
- Revalidate against the new schema
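Applying one approved reclassification might look like this sketch. The `.fractary/logs/{type}/` layout is taken from the example path earlier in this document; the frontmatter field and the revalidation hook are assumptions:

```bash
# Sketch: retype one log and relocate it (GNU sed; revalidation stubbed).
reclassify_log() {
  local log_path="$1" new_type="$2"
  local base
  base=$(basename "$log_path")
  sed -i "s/^log_type: .*/log_type: \"${new_type}\"/" "$log_path"
  mkdir -p ".fractary/logs/${new_type}"
  mv "$log_path" ".fractary/logs/${new_type}/${base}"
  # The real flow would now re-invoke log-validator against the new schema.
}
```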
### Step 5: Return Summary

```json
{
  "operation": "batch-reclassify",
  "total_untyped": 50,
  "reclassified": 35,
  "uncertain": 10,
  "failed": 5,
  "reclassifications": {
    "test": 15,
    "build": 10,
    "session": 5,
    "operational": 5
  }
}
```
## Batch Operation: batch-cleanup

### Step 1: Find Expired Logs (with buffer)
- List archived logs
- Check retention expiry plus the buffer period
- Identify deletion candidates
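The buffer simply extends the plain retention test used by batch-archive; a sketch under the same assumptions (GNU `date`, illustrative names):

```bash
# Sketch: a log is deletable only after retention AND the safety buffer elapse.
is_deletable() {
  local created="$1" retention_days="$2" buffer_days="${3:-7}"
  local cutoff
  cutoff=$(date -d "${created} + $((retention_days + buffer_days)) days" +%s)
  [ "$(date +%s)" -ge "$cutoff" ]
}

# Example: is_deletable "2024-01-15" 90 7 && echo "eligible for deletion"
```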
### Step 2: Safety Check
- REQUIRE `dry_run=false` for actual deletion
- REQUIRE explicit confirmation for production/critical types
- NEVER delete audit logs (per retention policy)
### Step 3: Delete (if authorized)
- Remove log files
- Update the archive index
- Log deletions to the audit trail
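Deletion plus the audit trail could be as small as this sketch; the trail path and entry shape are assumptions:

```bash
# Sketch: remove one log and append an audit record (path/shape assumed).
delete_log() {
  local log_path="$1"
  local size_kb
  size_kb=$(du -k "$log_path" | cut -f1)
  rm -f "$log_path"
  printf '{"deleted": "%s", "size_kb": %s, "at": "%s"}\n' \
    "$log_path" "$size_kb" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    >> ".fractary/logs/cleanup-audit.jsonl"
}
```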
### Step 4: Return Cleanup Report

```json
{
  "operation": "batch-cleanup",
  "dry_run": false,
  "total_candidates": 120,
  "deleted": 115,
  "protected": 5,
  "protected_reasons": {
    "audit_never_delete": 3,
    "production_deployment": 2
  },
  "space_freed_mb": 45
}
```
<OUTPUTS>
Example output for batch-validate:

📁 Discovering target logs...
Found: 100 logs to validate

🔄 Executing validation (parallel)
Progress: [████████████████████] 100/100 (12.5s)

📊 Results:
✓ Passed: 85 logs
✗ Failed: 10 logs (critical errors)
⚠ Warnings: 5 logs

Common issues:
- Missing duration_seconds: 15 logs
- Missing coverage data: 8 logs

✅ COMPLETED: Log Director Skill
Operation: batch-validate (success)
Total: 100 logs | Passed: 85 | Failed: 10
Duration: 12.5s | Workers: 5 | Throughput: 8 logs/sec
───────────────────────────────────────
Next: Review failed logs, or use batch-archive to archive completed logs
</OUTPUTS>
<DOCUMENTATION>
Write to execution log:
- Operation: batch operation type
- Total logs processed: {count}
- Results summary: passed/failed/skipped
- Duration: seconds
- Parallel workers: {count}
- Timestamp: ISO 8601
</DOCUMENTATION>
<ERROR_HANDLING>
**No logs match criteria:**
ℹ️ INFO: No logs match batch criteria
Operation: batch-validate
Filters: log_type=test, status=failed
Total logs of type 'test': 45 (all passed!)
Suggestion: Check filters or celebrate success
**Partial failure (fail-fast disabled):**
⚠️ BATCH PARTIAL: batch-validate
Completed: 100/100 logs
Failed: 10 logs (continuing as fail_fast=false)
Failures:
- test-042.md: Missing required field: test_framework
- test-057.md: Invalid pattern for test_id
- [8 more failures...]
Status: partial success
Suggestion: Fix individual logs and re-validate
**Worker pool error:**
❌ ERROR: Worker pool initialization failed
Requested workers: 10
System limit: 5
Action: Reduced to 5 workers and continuing
**Dry-run safety (cleanup):**
🛡️ SAFETY: Dry-run mode enabled
Operation: batch-cleanup
Would delete: 115 logs (45 MB)
Protected logs: 5
- 3 audit logs (never delete)
- 2 production deployments
To execute: Run with --dry-run=false --confirm
</ERROR_HANDLING>
## Scripts
This skill uses three supporting scripts:
1. **`scripts/init-worker-pool.sh {worker_count}`**
- Initializes parallel worker processes
- Sets up job queue and result aggregation
- Returns worker pool ID
2. **`scripts/parallel-execute.sh {worker_pool_id} {jobs_json}`**
- Distributes jobs across workers
- Executes in parallel with progress tracking
- Uses flock for concurrent result writing
- Returns execution summary
3. **`scripts/aggregate-results.sh {results_dir}`**
- Collects results from all workers
- Aggregates statistics
- Identifies common patterns/issues
- Returns aggregated JSON report