---
name: Execution Workflow
description: This skill should be used when the user asks to "execute task", "implement feature", "delegate work", "run workflow", "review code", "code quality check", or needs task orchestration and code review guidance. Provides execution, delegation, and code review patterns.
---

Provide a structured workflow for task execution through delegation to specialized sub-agents, plus comprehensive code review standards.

## Specialized Sub-Agents

- **quality_assurance**: quality and security checks (run in parallel)
- **implementation**: test, refactor, and docs tasks (parallel if independent)
- **review**: runs sequentially, after implementation

Provide each sub-agent with scope, file paths, Serena/Context7 tool instructions, reference implementations, and memory checks.

## Tool Priority

- Coding tasks: Codex MCP → Serena MCP → Context7 → Basic tools
- Non-coding tasks: Serena MCP → Context7 → Basic tools

## Parallel vs Sequential Execution

- Execute independent tasks concurrently: quality + security can run in parallel; test + docs can run in parallel when independent.
- Tasks with data dependencies must run in order; verify outputs before dependent tasks start.

## Sub-Agent Context

Sub-agents need: specific scope, file paths, tool usage instructions, reference implementations, and memory patterns.

## Code Review Phases

Four phases: Initial scan (syntax), Deep analysis (logic), Context evaluation (impact), Standards compliance (naming/docs).

## Systematic Code Review Process

**Decision**: Has code been modified or newly created?
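The parallel-vs-sequential model above can be sketched as a small wave scheduler. This is a minimal illustration, not part of the skill itself: the task graph and the `run` delegate are hypothetical stand-ins for real sub-agent delegation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical task graph: each task maps to the set of tasks it depends on.
tasks = {
    "quality": set(),             # independent -> parallel
    "security": set(),            # independent -> parallel
    "test": {"quality", "security"},
    "docs": {"quality", "security"},
    "review": {"test", "docs"},   # sequential: runs after implementation
}

def run(task):
    # Placeholder for delegating one task to a sub-agent.
    return f"{task}: done"

def execute(graph):
    done, results = set(), {}
    while len(done) < len(graph):
        # A "wave" is every not-yet-run task whose dependencies are complete.
        wave = [t for t in graph if t not in done and graph[t] <= done]
        if not wave:
            raise ValueError("circular dependency in task graph")
        # Independent tasks in the same wave run concurrently.
        with ThreadPoolExecutor() as pool:
            for t, r in zip(wave, pool.map(run, wave)):
                results[t] = r
        done.update(wave)  # outputs verified before dependents start
    return results

print(execute(tasks))
```

Here `quality` and `security` form the first wave, `test` and `docs` the second, and `review` runs alone in the third, matching the delegation order described above.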
- **Yes**: Apply the code review phases systematically to ensure quality.
- **No**: Skip review and proceed to the next task.

**Phase 1 - Initial Scan**
- Syntax errors and typos
- Missing imports or dependencies
- Obvious logic errors
- Code style violations

**Phase 2 - Deep Analysis**
- Algorithm correctness
- Edge case handling
- Error handling completeness
- Resource management

**Phase 3 - Context Evaluation**
- Breaking changes to public APIs
- Side effects on existing functionality
- Dependency compatibility

**Phase 4 - Standards Compliance**
- Naming conventions
- Documentation requirements
- Test coverage

## Quality Criteria

Evaluation criteria for code quality.

**Decision**: Is this a code review or quality assessment task?

- **Yes**: Apply the quality criteria across all dimensions.
- **No**: Focus on implementation patterns instead.

**Correctness**
- Logic matches requirements
- Edge cases handled
- Error conditions covered

**Security**
- Input validation
- Authentication/authorization
- Data sanitization
- Secrets handling

**Performance**
- Algorithm efficiency
- Resource usage
- Memory leaks
- N+1 queries

**Maintainability**
- Clear naming
- Appropriate comments
- Single responsibility
- DRY principle

**Testability**
- Test coverage adequate
- Tests meaningful
- Edge cases tested

## Feedback Categories

Categorize review feedback by priority.

**Decision**: Have you identified issues during code review?

- **Yes**: Apply the feedback categories to prioritize by severity.
- **No**: Continue the code review phases.

**Critical** (must fix before merge)
- Security vulnerabilities
- Data corruption risks
- Breaking changes

**Important** (should fix before merge)
- Logic errors
- Missing error handling
- Performance issues

**Suggestion** (nice-to-have improvements)
- Code style
- Refactoring opportunities
- Documentation

**Positive** (what was done well)
- Good patterns
- Clever solutions
- Thorough testing

## Review Output Format

Standard format for code review results.

**Decision**: Is it time to communicate code review findings?
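The severity categories above can be sketched as a small data model that sorts findings so critical items surface first. The class, field names, and sample findings are illustrative assumptions, not mandated by this skill.

```python
from dataclasses import dataclass
from enum import IntEnum

# Lower value = higher priority; mirrors the feedback categories above.
class Severity(IntEnum):
    CRITICAL = 0    # must fix before merge
    IMPORTANT = 1   # should fix before merge
    SUGGESTION = 2  # nice to have
    POSITIVE = 3    # acknowledge good work

@dataclass
class Finding:
    severity: Severity
    file: str
    line: int
    message: str

    def ref(self):
        # file:line reference, as the output format requires
        return f"{self.file}:{self.line}"

# Hypothetical findings from one review pass.
findings = [
    Finding(Severity.SUGGESTION, "api.py", 88, "Extract duplicated parsing logic"),
    Finding(Severity.CRITICAL, "auth.py", 42, "Password logged in plain text"),
    Finding(Severity.POSITIVE, "tests/test_auth.py", 10, "Thorough edge-case tests"),
]

# Critical and important issues first, style suggestions last.
for f in sorted(findings, key=lambda f: f.severity):
    print(f"[{f.severity.name}] {f.ref()} - {f.message}")
```

Sorting by an `IntEnum` keeps the ordering explicit in one place, so adding a new category only requires choosing its rank.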
- **Yes**: Apply the review output format for structured communication.
- **No**: Continue analyzing the code through the review phases.

The review output should contain:

- Overall assessment and recommendation
- Must-fix items with file:line references
- Should-fix items
- Optional improvements
- Good practices observed
- Clarifications needed

## Anti-Patterns and Corrections

| Anti-pattern | Correction |
| --- | --- |
| Focusing on code style issues when functionality is broken | Address critical and important issues first, style suggestions last |
| Approving changes without thorough review | Systematically review all phases: scan, deep analysis, context, standards |
| Providing only critical feedback without acknowledging good work | Balance feedback with positive observations of good practices |
| Giving feedback without specific, actionable suggestions | Provide file:line references and concrete improvement suggestions |
| Executing independent tasks sequentially | Identify and execute independent tasks in parallel for efficiency |
| Attempting to parallelize tasks with data dependencies | Analyze dependencies and execute dependent tasks sequentially |

## Execution Workflow Steps

1. **Understand requirements and identify scope**
   - Parse the task description for key objectives
   - Identify affected files and components
   - Check Serena memories for existing patterns
2. **Split into manageable units**
   - Identify atomic tasks
   - Estimate the complexity of each task
   - Assign to appropriate sub-agents
3. **Identify parallel vs sequential execution**
   - Map task dependencies
   - Group independent tasks for parallel execution
   - Order dependent tasks sequentially
4. **Assign to sub-agents with detailed instructions**
   - Provide specific scope and expected deliverables
   - Include target file paths
   - Specify MCP tool usage instructions
   - Reference existing implementations
5. **Verify and combine results**
   - Review sub-agent outputs
   - Resolve conflicts between outputs
   - Ensure consistency across changes

## Best Practices

- Analyze task dependencies before execution to determine a parallel vs sequential execution model.
- Provide comprehensive context to sub-agents, including file paths, tool usage, and reference implementations.
- Systematically review all phases: initial scan, deep analysis, context evaluation, standards compliance.
- Balance critical feedback with positive observations of good practices.
- Provide file:line references and concrete improvement suggestions.
- Check Serena memories for existing patterns before delegating implementation tasks.

## Rules

- Execute independent tasks in parallel.
- Never parallelize tasks with data dependencies.
- Verify sub-agent outputs before integration.
- Run quality checks after changes:
  - quality + security: concurrent checks
  - test + docs: simultaneous creation when independent
- Ensure no regression in existing functionality.
- Confirm all acceptance criteria are met.

## Error Handling

| Situation | Response |
| --- | --- |
| Sub-agent returns partial results | Note in report, proceed |
| Sub-agent task fails | Document the issue, use AskUserQuestion for clarification |
| Critical task cannot be completed | STOP, present options to the user |
| Sub-agent introduces a breaking change | BLOCK the operation, require explicit user acknowledgment |

## Integration

- Primary agent for implementing features with sub-agent delegation.
- Use for post-implementation code review and quality assessment.
- Delegate debugging tasks when critical issues are identified during review.
- Use for memory checks and symbol operations during delegation.
- Use when code review reveals unclear implementation details.
- Use to verify test coverage and quality during review.

## Summary

**Do**
- Delegate detailed work to sub-agents
- Execute independent tasks in parallel
- Verify outputs before integration

**Don't**
- Implement detailed logic directly
- Execute independent tasks sequentially
- Skip verification of sub-agent outputs
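The error-handling table above can be sketched as a simple dispatch. The status names and action strings here are hypothetical labels chosen for illustration; only the situation-to-response mapping comes from the table.

```python
from enum import Enum, auto

class Status(Enum):
    PARTIAL = auto()           # sub-agent returned partial results
    FAILED = auto()            # sub-agent task failed
    CRITICAL_BLOCKED = auto()  # critical task cannot be completed
    BREAKING_CHANGE = auto()   # sub-agent introduced a breaking change

# Maps each situation to (action, requires_user_input), per the table above.
HANDLERS = {
    Status.PARTIAL: ("note in report and proceed", False),
    Status.FAILED: ("document issue, ask user for clarification", True),
    Status.CRITICAL_BLOCKED: ("stop and present options to user", True),
    Status.BREAKING_CHANGE: ("block until user explicitly acknowledges", True),
}

def handle(status):
    # Look up the response; every status except PARTIAL escalates to the user.
    return HANDLERS[status]

print(handle(Status.BREAKING_CHANGE))
```

Keeping the mapping in one table makes the escalation policy auditable: only partial results proceed without user involvement.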