---
name: impl
description: Execute pending tasks for a feature — TDD-driven implementation with sub-agent isolation and progress tracking.
---

# Code Forge — Impl

Execute pending implementation tasks for a feature, following the plan generated by `/code-forge:plan`.

## When to Use

- Have a generated plan (`state.json` + `tasks/` directory) ready for execution
- Need to resume a partially completed feature
- Need task-by-task execution with TDD and progress tracking

## Workflow

```
Locate Feature → Confirm Execution → Task Loop (sub-agents) → Verify → Complete
```

## Context Management

Step 11 dispatches a dedicated sub-agent for each task, so code changes from one task don't pollute the context of the next. The main context only handles coordination: reading state, dispatching sub-agents, and updating status.

## Detailed Steps

@../shared/configuration.md

---

### Step 1: Locate Feature

#### 1.1 With Feature Name Argument

If the user provided a feature name (e.g., `/code-forge:impl user-auth`):

1. Look for `{output_dir}/{feature_name}/state.json`
2. If not found, search `{output_dir}/*/state.json` for a feature whose `feature` field matches
3. If still not found, show error: "Feature '{feature_name}' not found. Run `/code-forge:status` to see available features."

#### 1.2 Without Argument

If no feature name is provided:

1. Scan `{output_dir}/*/state.json` for all features
2. Filter to features with `status` = `"pending"` or `"in_progress"` (exclude `"completed"`)
3. If none found: "No features ready for execution. Run `/code-forge:plan` to create one."
4. If one found: use it automatically
5. If multiple found: display a table and use `AskUserQuestion` to let the user select

#### 1.3 Validate Feature State

After locating the feature:

1. Read `state.json`
2. Check that the `tasks` array is non-empty
3. Check that the task files in the `tasks/` directory exist
4. Show a feature progress summary: completed/in_progress/pending counts
5. If all tasks are `"completed"`: "All tasks already completed. Run `/code-forge:review {feature}` to review."

---

### Step 10: Ask for Execution Method

Use `AskUserQuestion`:

- **"Start Execution Now (Recommended)"** — execute tasks one by one with auto-tracked progress → enter Step 11
- **"Manual Execution Later"** — save the plan, show resume instructions (`/code-forge:impl {feature}`)
- **"Team Collaboration Mode"** — show guidelines: commit the plan to Git, claim tasks via `assignee`, sync `state.json`
- **"Generate Plan Only"** — only generate plan files, stop here

### Step 11: Task Execution Loop (via Sub-agents)

**Each task is executed by a dedicated sub-agent** to prevent cross-task context accumulation. The main context only handles coordination: reading state, dispatching sub-agents, and updating status.

#### 11.1 Coordination Loop (Main Context)

1. Read `state.json`
2. Find the next task in `execution_order` that is `"pending"` with no unmet dependencies
3. If no such task exists: display "All tasks completed!" and exit the loop
4. Display: "Starting task: {id} - {title}"
5. Update the task status to `"in_progress"` in `state.json`
6. **Dispatch a sub-agent** for this task (see 11.2)
7. Review the sub-agent's execution summary
8. Ask the user via `AskUserQuestion`: "Is the task completed?"
   - **"Completed, continue to next"** → update status to `"completed"`, continue the loop
   - **"Encountered issue, pause"** → keep `"in_progress"`, exit the loop
   - **"Skip this task"** → update status to `"skipped"`, continue the loop
9. Repeat from step 1

#### 11.2 Task Execution Sub-agent

Spawn a `Task` tool call with:

- `subagent_type`: `"general-purpose"`
- `description`: `"Execute task: {task_id}"`

**Sub-agent prompt must include:**

- The task file path: `{output_dir}/{feature_name}/tasks/{task_id}.md` (the sub-agent reads it)
- The project root path
- The tech stack and testing strategy (from `state.json` metadata or `plan.md`)
- Instruction to follow TDD: write tests → run tests → implement → verify
- Instruction to return ONLY a concise execution summary

**Sub-agent executes:**

1. Read the task file from disk
2. Follow the task steps (TDD: write tests → run tests → implement → verify)
3. Commit changes if all tests pass (with a descriptive commit message)

**Sub-agent must return** a concise execution summary:

```
STATUS: completed | partial | blocked
FILES_CHANGED:
- path/to/file.ext (created | modified)
- ...
TEST_RESULTS: X passed, Y failed
SUMMARY: <1-2 sentence description of what was done>
ISSUES:
```

**Main context retains:** only the execution summary (~0.5-1KB per task). All code changes, test outputs, and file reads stay in the sub-agent's context and are discarded.

#### 11.3 Parallel Execution (Optional)

When multiple pending tasks have **no mutual dependencies** (none depends on another), they may be dispatched as parallel sub-agents using multiple `Task` tool calls in a single message. Each sub-agent works in isolation on its own task.

**Use parallel execution only when:**

- Tasks modify different files (no overlap in "Files Involved")
- Tasks have no dependency relationship (neither `depends on` the other)
- The user has agreed to parallel execution

After all parallel sub-agents complete, review each summary and update `state.json` for all completed tasks before continuing the loop.

### Step 11.5: Verify Generated Files

Before the completion summary, verify all generated files.

**Checks:**

1. Required files exist and are non-empty: `overview.md`, `plan.md`, `state.json`
2. `tasks/` directory exists and contains `.md` files with descriptive names
3. `state.json` is valid JSON with the required fields (`feature`, `status`, `tasks`, `execution_order`); the task count matches the task files; all IDs in `execution_order` match `tasks` entries
4. `plan.md` contains: a title heading, `## Goal`, `## Task Breakdown`, `## Acceptance Criteria`
5. `overview.md` contains a `## Task Execution Order` table

**On pass:** show the checklist with all items passing, continue.

**On error (missing required files):** show what's missing. Attempt auto-fix:

- Empty `overview.md` → generate a template from plan data
- Missing `tasks/` → create the directory
- Missing `state.json` → generate an initial state from the task files found

Then re-verify.

**On warnings (count mismatch, missing optional section):** show warnings, continue by default.

---

### Step 12: Completion Summary

After all tasks are completed:

1. Update `state.json` with the final status
2. Regenerate the project-level overview (`{output_dir}/overview.md`)

```
Feature implementation completed!

Completed tasks: {completed}/{total}
Location: {output_dir}/{feature_name}/
Total time: {actual_time}

Next steps:
  /code-forge:review {feature_name}   Review code quality
  /code-forge:status {feature_name}   View final status
```
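
The task-selection rule in Step 11.1 ("next `"pending"` task in `execution_order` with no unmet dependencies") can be sketched in Python. This is a minimal illustration, not part of the command itself; it assumes each entry in `tasks` carries a `depends_on` list of task IDs — the exact field name for dependencies is not specified above.

```python
def next_ready_task(state: dict):
    """Return the first pending task in execution_order whose
    dependencies are all completed, or None when nothing is ready."""
    tasks = {t["id"]: t for t in state["tasks"]}
    done = {tid for tid, t in tasks.items() if t["status"] == "completed"}
    for tid in state["execution_order"]:
        task = tasks[tid]
        if task["status"] != "pending":
            continue
        # `depends_on` is an assumed field name; treat a missing
        # field as "no dependencies".
        if set(task.get("depends_on", [])) <= done:
            return task
    return None  # triggers "All tasks completed!" (or a pause on blockage)
```

Because the loop walks `execution_order` rather than the raw `tasks` array, the planner's intended ordering is preserved even when several tasks are simultaneously unblocked.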
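
The file checks in Step 11.5 can likewise be sketched as a small verifier. This is a hedged sketch under the `state.json` shape described above (`feature`, `status`, `tasks`, `execution_order`); the function name `verify_feature_dir` and the returned problem strings are illustrative, and the `plan.md`/`overview.md` section checks are omitted for brevity.

```python
import json
from pathlib import Path


def verify_feature_dir(feature_dir: Path) -> list[str]:
    """Return a list of problems found in a generated feature directory."""
    problems = []
    # Check 1: required files exist and are non-empty.
    for name in ("overview.md", "plan.md", "state.json"):
        f = feature_dir / name
        if not f.is_file() or f.stat().st_size == 0:
            problems.append(f"missing or empty: {name}")
    # Check 2: tasks/ exists and contains .md files.
    tasks_dir = feature_dir / "tasks"
    task_files = sorted(tasks_dir.glob("*.md")) if tasks_dir.is_dir() else []
    if not task_files:
        problems.append("tasks/ missing or contains no .md files")
    # Check 3: state.json is valid JSON with the required fields,
    # and execution_order / task files agree with the tasks array.
    state_file = feature_dir / "state.json"
    if state_file.is_file():
        try:
            state = json.loads(state_file.read_text())
        except json.JSONDecodeError:
            problems.append("state.json is not valid JSON")
        else:
            for field in ("feature", "status", "tasks", "execution_order"):
                if field not in state:
                    problems.append(f"state.json missing field: {field}")
            ids = {t["id"] for t in state.get("tasks", [])}
            if set(state.get("execution_order", [])) != ids:
                problems.append("execution_order does not match task ids")
            if len(task_files) != len(ids):
                problems.append("task file count does not match state.json")
    return problems
```

An empty result corresponds to the "on pass" branch; missing required files map to the auto-fix branch, and count mismatches to the warnings branch.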