Claude Code Plugins

Community-maintained marketplace


parallel-arch-review

@lprior-repo/my-claude-config

This skill should be used when the user asks to "review architecture", "analyze design", "run parallel review", "multi-agent review", "5-perspective review", or when coordinating 24+ agents on code changes. Enforces atomic task decomposition with 5-lens review protocol.

Install Skill

1. Download skill
2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: parallel-arch-review
description: This skill should be used when the user asks to "review architecture", "analyze design", "run parallel review", "multi-agent review", "5-perspective review", or when coordinating 24+ agents on code changes. Enforces atomic task decomposition with 5-lens review protocol.

Parallel Architecture Review

A multi-agent review protocol that decomposes work into atomic units and applies 5 orthogonal review perspectives before any code is merged.

Core Principle: Atomic Vertical Slices

Each agent owns ONE vertical slice (single file or tightly-coupled pair). No agent touches another's slice until explicit handoff.

Atomic Step Definition

The smallest possible change is ONE of:

  • Add single type definition
  • Add single function signature (stub)
  • Implement single function body
  • Add single test case
  • Fix single failing test

If a change spans multiple of these, decompose further.
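
As a concrete illustration, a single atomic step in a Gleam slice might be nothing more than one type definition, committed on its own (the type below is purely illustrative):

  // Atomic step: add ONE type definition, then stop and commit
  pub type MealSlot {
    Breakfast
    Lunch
    Dinner
  }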

The 5-Lens Review Protocol

Every atomic task passes through 5 independent review perspectives before completion. Each lens asks different questions:

Lens 1: TYPE SAFETY

  • Are types maximally precise (no any, no stringly-typed)?
  • Do custom types encode domain constraints?
  • Are impossible states unrepresentable?
  • Is Option used for absence and Result for failure? (See the sketch below.)
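
A minimal Gleam sketch of what this lens looks for, using an illustrative Email/Profile domain that is not taken from any real codebase:

  import gleam/option.{type Option}
  import gleam/string

  // A precise domain type instead of passing bare Strings around
  pub opaque type Email {
    Email(value: String)
  }

  // Absence is an Option, failure is a Result: both are visible in the types
  pub type Profile {
    Profile(email: Email, nickname: Option(String))
  }

  pub fn parse_email(raw: String) -> Result(Email, String) {
    case string.contains(raw, "@") {
      True -> Ok(Email(raw))
      False -> Error("email is missing '@': " <> raw)
    }
  }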

Lens 2: ERROR PATHS

  • What can fail in this code?
  • Is every error case explicitly handled?
  • Are errors propagated or swallowed?
  • Do error messages enable debugging? (See the sketch below.)
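
A minimal Gleam sketch, assuming a hypothetical quantity parser, of errors that are named, carry context, and are propagated rather than swallowed:

  import gleam/int
  import gleam/result

  // Every failure mode gets its own variant with enough detail to debug
  pub type QuantityError {
    NotANumber(raw: String)
    OutOfRange(value: Int)
  }

  pub fn parse_quantity(raw: String) -> Result(Int, QuantityError) {
    // Propagate the parse failure instead of silently defaulting to 0
    use value <- result.try(
      int.parse(raw)
      |> result.replace_error(NotANumber(raw)),
    )
    case 0 < value && value <= 100 {
      True -> Ok(value)
      False -> Error(OutOfRange(value))
    }
  }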

Lens 3: EDGE CASES

  • What happens with empty input?
  • What happens at boundaries (0, max, overflow)?
  • What about concurrent access?
  • What if dependencies are unavailable?
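
A couple of edge-case tests in this spirit, reusing the hypothetical parse_quantity sketch from Lens 2 (gleeunit assumed as the test framework):

  import gleeunit/should

  pub fn parse_quantity_empty_input_test() {
    parse_quantity("")
    |> should.equal(Error(NotANumber("")))
  }

  pub fn parse_quantity_upper_boundary_test() {
    parse_quantity("101")
    |> should.equal(Error(OutOfRange(101)))
  }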

Lens 4: INTEGRATION

  • Does this fit existing patterns in the codebase?
  • Are naming conventions followed?
  • Does the API match sibling modules?
  • Will this break existing consumers?

Lens 5: SIMPLICITY

  • Is this the simplest solution that works?
  • Is every abstraction justified by 3+ occurrences (the DRY threshold)?
  • Can any code be deleted?
  • Is the "why" documented if non-obvious?

Parallel Agent Protocol

Task Claiming

1. Agent checks `bd ready` for available tasks
2. Agent claims task: `bd update <id> --status in_progress`
3. Agent reserves files via Agent Mail: `/reserve <pattern>`
4. No other agent may touch reserved files
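
An example claiming sequence, with an illustrative task id (following the naming convention below) and an illustrative file path:

  bd ready                                    # list available atomic tasks
  bd update meal-planner-recipe-types --status in_progress
  /reserve src/handler/recipe.gleam           # Agent Mail command, not a shell command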

Review Rotation

For each atomic task:
  Agent A: Implements (owns the slice)
  Agent B: Reviews Lens 1 (Type Safety)
  Agent C: Reviews Lens 2 (Error Paths)
  Agent D: Reviews Lens 3 (Edge Cases)
  Agent E: Reviews Lens 4 (Integration)
  Agent F: Reviews Lens 5 (Simplicity)

With 24 agents, 4 parallel task streams run simultaneously (6 agents per stream).

Handoff Protocol

1. Implementer completes atomic change
2. Creates child Beads for each lens review
3. Reviewers claim their lens task
4. Each reviewer either:
   - PASS: Closes their review task
   - BLOCK: Creates blocker task, links dependency
5. Original task closes only when all 5 lenses pass

Conflict Prevention

File Ownership Rules

  • Beads task ID = file ownership scope
  • One task, one file (or tightly-coupled pair)
  • Shared utilities require explicit coordination task
  • No shared abstractions until proven needed (3x rule)

The 3x Rule for Abstraction

Do NOT create shared code until:

  1. Pattern appears in 3 different files
  2. Each instance is owned by a different agent
  3. A coordination task is explicitly created for the extraction

Beads Integration

Task Structure

Parent Task: Feature/Epic
├── Slice A: handler/foo.gleam (Agent 1 owns)
│   ├── Type definitions
│   ├── Function stubs
│   ├── Implementation
│   └── 5x Review tasks (Agents 2-6)
├── Slice B: service/bar.gleam (Agent 7 owns)
│   └── ...
└── Integration tests (after all slices pass)

Task Naming Convention

[prefix]-[slice]-[step]
Example: meal-planner-recipe-types
Example: meal-planner-recipe-impl
Example: meal-planner-recipe-lens1-types

TCR at Atomic Level

Every atomic step follows TCR (test && commit || revert):

gleam test && git commit -m "PASS: [task-id] description" || git reset --hard

If the test fails, revert immediately and try a different approach. Do not debug in place.
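
One possible convenience wrapper, assuming a POSIX shell and that the Beads task id and a short description are passed as arguments:

  # Sketch of a TCR helper: stage everything, then test-commit-or-revert
  tcr() {
    git add --all
    gleam test && git commit -m "PASS: $1 $2" || git reset --hard
  }
  # Usage: tcr meal-planner-recipe-impl "parse recipe quantities"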

Additional Resources

Reference Files

  • references/perspectives.md - Deep dive on each lens with examples
  • references/decomposition.md - How to break features into atoms

Examples

  • examples/atomic-task-breakdown.md - Real feature decomposition