Claude Code Plugins

Execution Workflow

@mtaku3/nix-config

This skill should be used when the user asks to "execute task", "implement feature", "delegate work", "run workflow", "review code", "code quality check", or needs task orchestration and code review guidance. Provides execution, delegation, and code review patterns.

Install Skill

1. Download skill
2. Enable skills in Claude

Open claude.ai/settings/capabilities and find the "Skills" section

3. Upload to Claude

Click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reviewing its instructions before using it.

SKILL.md

name: Execution Workflow
description: This skill should be used when the user asks to "execute task", "implement feature", "delegate work", "run workflow", "review code", "code quality check", or needs task orchestration and code review guidance. Provides execution, delegation, and code review patterns.
version: 0.2.0
Provide a structured workflow for task execution through delegation to specialized sub-agents, together with comprehensive code review standards.

Execution workflow:

1. Understand requirements and identify scope:

  • Parse task description for key objectives
  • Identify affected files and components
  • Check Serena memories for existing patterns

2. Split into manageable units:

  • Identify atomic tasks
  • Estimate complexity of each task
  • Assign to appropriate sub-agents

3. Identify parallel vs sequential execution:

  • Map task dependencies
  • Group independent tasks for parallel execution
  • Order dependent tasks sequentially

4. Assign to sub-agents with detailed instructions:

  • Provide specific scope and expected deliverables
  • Include target file paths
  • Specify MCP tool usage instructions
  • Reference existing implementations

5. Verify and combine results:

  • Review sub-agent outputs
  • Resolve conflicts between outputs
  • Ensure consistency across changes

Specialized sub-agents, organized by execution model:

  • Syntax, type, format verification
  • Vulnerability detection
  • Test creation, coverage
  • Refactoring, tech debt
  • Documentation updates
  • Post-implementation review

Essential information to provide when delegating to sub-agents (see the delegation sketch after the Codex usage lists below):

  • Specific scope and expected deliverables
  • Target file paths
  • Serena MCP usage: find_symbol, get_symbols_overview, search_for_pattern
  • Context7 MCP usage for library verification
  • Reference implementations with specific paths
  • Memory check: list_memories for patterns

Tool selection hierarchy for task execution:

For coding tasks (generation/modification/review):

  • Priority 1: Codex MCP (sandbox: workspace-write, approval-policy: on-failure)
  • Priority 2: Serena MCP for symbol operations and memory
  • Priority 3: Context7 for library documentation
  • Priority 4: Basic tools (Read/Edit/Write) as fallback

For non-coding tasks (research/analysis):

  • Priority 1: Serena MCP for symbol search and memory
  • Priority 2: Context7 for library documentation
  • Priority 3: Basic tools (Read/Edit/Write)

Codex MCP configuration:

  • sandbox: workspace-write (allows code generation/modification)
  • approval-policy: on-failure (run commands automatically; request approval only when a command fails)

Prohibited Codex usage:

  • Research/analysis - use Explore agent, Serena MCP
  • Documentation generation - use docs agent

Allowed Codex usage:

  • Code generation (new files/functions)
  • Code modification (editing/refactoring)
  • Code review and quality analysis
  • Test code generation
  • Performance optimization suggestions
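
The sketch below (Python, illustrative only) encodes the tool selection hierarchy and the delegation checklist above as plain data. The `select_tools` helper and `DelegationBrief` record are hypothetical names introduced here, not part of Claude Code, Codex, Serena, or Context7; they only show one way an orchestrator could carry this information when delegating to a sub-agent.

```python
from dataclasses import dataclass, field

# Priority-ordered tool lists, copied from the hierarchy above.
TOOL_PRIORITY = {
    "coding": [      # generation / modification / review / tests / optimization
        "Codex MCP (sandbox: workspace-write, approval-policy: on-failure)",
        "Serena MCP (symbol operations, memory)",
        "Context7 (library documentation)",
        "Basic tools (Read/Edit/Write)",
    ],
    "non-coding": [  # research / analysis / documentation (Codex is prohibited here)
        "Serena MCP (symbol search, memory)",
        "Context7 (library documentation)",
        "Basic tools (Read/Edit/Write)",
    ],
}

CODING_TASKS = {"generation", "modification", "review", "tests", "optimization"}


def select_tools(task_kind: str) -> list[str]:
    """Return the priority-ordered tool list for a task kind (hypothetical helper)."""
    key = "coding" if task_kind in CODING_TASKS else "non-coding"
    return TOOL_PRIORITY[key]


@dataclass
class DelegationBrief:
    """Context packet for a sub-agent, mirroring the delegation checklist above."""
    scope: str                       # specific scope and expected deliverables
    target_files: list[str]          # target file paths
    mcp_instructions: list[str]      # e.g. Serena find_symbol / get_symbols_overview
    reference_paths: list[str] = field(default_factory=list)  # reference implementations
    memory_checks: list[str] = field(default_factory=list)    # e.g. list_memories for patterns
```

A brief for a refactoring task might set scope to the acceptance criteria, list the files to touch, and point the sub-agent at an existing implementation to imitate.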
Systematic code review process.

Phase 1 - Initial Scan:

  • Syntax errors and typos
  • Missing imports or dependencies
  • Obvious logic errors
  • Code style violations

Phase 2 - Deep Analysis:

  • Algorithm correctness
  • Edge case handling
  • Error handling completeness
  • Resource management

Phase 3 - Context Evaluation:

  • Breaking changes to public APIs
  • Side effects on existing functionality
  • Dependency compatibility

Phase 4 - Standards Compliance:

  • Naming conventions
  • Documentation requirements
  • Test coverage
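
The four phases above can be treated as ordered checklists. The encoding below is a minimal sketch; the `REVIEW_PHASES` structure and `review_checklist` helper are illustrative names, not something the skill requires.

```python
# Ordered review phases and their checks, copied from the phases above.
REVIEW_PHASES = [
    ("Initial Scan", [
        "Syntax errors and typos",
        "Missing imports or dependencies",
        "Obvious logic errors",
        "Code style violations",
    ]),
    ("Deep Analysis", [
        "Algorithm correctness",
        "Edge case handling",
        "Error handling completeness",
        "Resource management",
    ]),
    ("Context Evaluation", [
        "Breaking changes to public APIs",
        "Side effects on existing functionality",
        "Dependency compatibility",
    ]),
    ("Standards Compliance", [
        "Naming conventions",
        "Documentation requirements",
        "Test coverage",
    ]),
]


def review_checklist() -> list[str]:
    """Flatten the phases into one ordered list of checks to walk through."""
    return [f"{phase}: {item}" for phase, items in REVIEW_PHASES for item in items]
```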
Evaluation criteria for code quality.

Correctness:

  • Logic matches requirements
  • Edge cases handled
  • Error conditions covered

Security:

  • Input validation
  • Authentication/authorization
  • Data sanitization
  • Secrets handling

Performance:

  • Algorithm efficiency
  • Resource usage
  • Memory leaks
  • N+1 queries

Maintainability:

  • Clear naming
  • Appropriate comments
  • Single responsibility
  • DRY principle

Testability:

  • Test coverage adequate
  • Tests meaningful
  • Edge cases tested
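
The same criteria can be carried as a rubric so a reviewer can confirm that every quality dimension was at least considered. A small sketch, with hypothetical names:

```python
# Quality dimensions and checks, copied from the criteria above.
QUALITY_RUBRIC = {
    "Correctness": ["Logic matches requirements", "Edge cases handled", "Error conditions covered"],
    "Security": ["Input validation", "Authentication/authorization", "Data sanitization", "Secrets handling"],
    "Performance": ["Algorithm efficiency", "Resource usage", "Memory leaks", "N+1 queries"],
    "Maintainability": ["Clear naming", "Appropriate comments", "Single responsibility", "DRY principle"],
    "Testability": ["Test coverage adequate", "Tests meaningful", "Edge cases tested"],
}


def unreviewed_dimensions(covered: set[str]) -> list[str]:
    """Return the dimensions that no finding or note has touched yet (hypothetical helper)."""
    return [dim for dim in QUALITY_RUBRIC if dim not in covered]
```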
Categorization of review feedback by priority.

Critical: Must fix before merge

  • Security vulnerabilities
  • Data corruption risks
  • Breaking changes

Important: Should fix before merge

  • Logic errors
  • Missing error handling
  • Performance issues

Suggestion: Nice to have improvements

  • Code style
  • Refactoring opportunities
  • Documentation

Positive: What was done well

  • Good patterns
  • Clever solutions
  • Thorough testing
Standard format for code review results:

  • Overall assessment and recommendation
  • Must-fix items with file:line references
  • Should-fix items
  • Optional improvements
  • Good practices observed
  • Clarifications needed
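
A minimal sketch of how findings could be collected and rendered into this format. The `ReviewFinding` record, the severity strings, and the `format_review` helper are assumptions for illustration; they are not an interface defined by the skill or by Claude Code.

```python
from dataclasses import dataclass


@dataclass
class ReviewFinding:
    severity: str         # "Critical" | "Important" | "Suggestion" | "Positive"
    file: str
    line: int
    message: str
    suggestion: str = ""  # concrete improvement, when applicable


# Map severities to the report sections listed above, in output order.
SECTION_BY_SEVERITY = {
    "Critical": "Must fix",
    "Important": "Should fix",
    "Suggestion": "Optional improvements",
    "Positive": "Good practices observed",
}


def format_review(overall: str, findings: list[ReviewFinding], questions: list[str]) -> str:
    """Render findings into the standard review sections with file:line references."""
    lines = [f"Overall assessment: {overall}", ""]
    for severity, section in SECTION_BY_SEVERITY.items():
        items = [f for f in findings if f.severity == severity]
        if items:
            lines.append(f"{section}:")
            for f in items:
                note = f" -> {f.suggestion}" if f.suggestion else ""
                lines.append(f"  • {f.file}:{f.line} {f.message}{note}")
            lines.append("")
    if questions:
        lines.append("Clarifications needed:")
        lines.extend(f"  • {q}" for q in questions)
    return "\n".join(lines)
```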
Execution rules:

  • Execute independent tasks in parallel
  • Never parallelize tasks with data dependencies
  • Verify sub-agent outputs before integration
  • Run quality checks after changes
  • quality + security: Concurrent checks
  • test + docs: Simultaneous creation when independent
  • Ensure no regression in existing functionality
  • Confirm all acceptance criteria met

Best practices:

  • Analyze task dependencies before execution to determine parallel vs sequential execution model
  • Provide comprehensive context to sub-agents, including file paths, tool usage, and reference implementations
  • Systematically review all phases: initial scan, deep analysis, context evaluation, standards compliance
  • Balance critical feedback with positive observations of good practices
  • Provide file:line references and concrete improvement suggestions
  • Check Serena memories for existing patterns before delegating implementation tasks

Common mistakes and corrections:

  • Focusing on code style issues when functionality is broken → Address critical and important issues first, style suggestions last
  • Approving changes without thorough review → Systematically review all phases: scan, deep analysis, context, standards
  • Providing only critical feedback without acknowledging good work → Balance feedback with positive observations of good practices
  • Giving feedback without specific, actionable suggestions → Provide file:line references and concrete improvement suggestions
  • Executing independent tasks sequentially → Identify and execute independent tasks in parallel for efficiency
  • Attempting to parallelize tasks with data dependencies → Analyze dependencies and execute dependent tasks sequentially
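
The parallel-vs-sequential rules above map directly onto topological scheduling. The sketch below uses Python's standard-library graphlib to split a task graph into batches: every task in a batch has all of its dependencies satisfied, so the batch can be delegated to sub-agents concurrently, while dependent tasks wait for the preceding batch. The task names in the example are hypothetical.

```python
from graphlib import TopologicalSorter


def parallel_batches(dependencies: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into batches; tasks within a batch are mutually independent."""
    sorter = TopologicalSorter(dependencies)
    sorter.prepare()                      # raises CycleError on circular dependencies
    batches = []
    while sorter.is_active():
        ready = list(sorter.get_ready())  # all tasks whose dependencies are done
        batches.append(ready)
        sorter.done(*ready)
    return batches


# Example: tests and docs depend on the implementation; quality and security
# checks are independent of each other and can run concurrently afterwards.
deps = {
    "implement feature": set(),
    "write tests": {"implement feature"},
    "update docs": {"implement feature"},
    "quality check": {"write tests"},
    "security check": {"write tests"},
}
print(parallel_batches(deps))
# e.g. [['implement feature'], ['write tests', 'update docs'], ['quality check', 'security check']]
```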