---
name: plan
description: Analyze documentation (or a prompt) and generate an implementation plan with task breakdown, TDD steps, and progress tracking.
---

# Code Forge — Plan

Generate an implementation plan from a feature document or a requirement prompt.

## When to Use

- Have a feature document that needs to be broken into development tasks
- Have a requirement idea (text prompt) that needs planning
- Need a structured plan with TDD task breakdown

## Workflow

```
Input (Document or Prompt) → Analysis → Planning → Task Breakdown → Status Tracking
```

## Context Management

Steps 2, 6, and 7 are offloaded to sub-agents via the `Task` tool to prevent context window exhaustion on large projects. The main context retains only concise summaries returned by each sub-agent, while full document analysis, file generation, and code implementation happen in isolated sub-agent contexts that are discarded after completion.

**Actual execution order:** 0 → **0.9 (reference docs, if configured)** → 0.8 (prompt mode only) → 1 → **2 (sub-agent)** → 3 → 4 → **6 (sub-agent)** → **7 (sub-agent)** → 5 → 8 → 8.5 → 9

Step 5 (overview.md) executes after Steps 6 and 7 because it references task files generated by those steps.
## Detailed Steps

@../shared/configuration.md

**Plan-specific additions to Step 0:**

- **0.2 additional defaults:** `reference_docs.sources` = `[]`, `reference_docs.exclude` = `[]`
- **0.3 additional validation:**
  - `reference_docs.sources` must be an array of strings (fall back to `[]` on error)
  - `reference_docs.sources` entries must NOT contain `..` (security risk)
  - `reference_docs.sources` entries must NOT point to system directories (`node_modules/`, `.git/`, `build/`)
  - `reference_docs.exclude` must be an array of strings (fall back to `[]` on error)
- **0.4 additional display:** Resolved file creation path: `{output_dir}/{feature_name}/`
- **0.4 error handling:**
  - Config file not found → note "using defaults" and continue
  - Config file parse error → show error, fall back to defaults, continue
  - Invalid config values → show warnings, fall back to defaults for invalid fields, continue
- **0.6 path resolution notes:** `base_dir` empty string means project root; `input_dir` default: `docs/features/`; `output_dir` default: `planning/`

---

### Step 0.9: Resolve and Summarize Reference Docs

**This step only runs when `reference_docs.sources` is non-empty in the merged configuration.** If `reference_docs.sources` is empty or not configured, skip directly to Step 0.8.

#### 0.9.1 Resolve Glob Patterns

1. Resolve each pattern in `config.reference_docs.sources` against `project_root`
2. Apply `config.reference_docs.exclude` patterns to filter results
3. Auto-exclude `{output_dir}/**` to prevent circular references
4. Deduplicate results (same file matched by multiple patterns)
5. If 0 files matched → display: `Reference docs: 0 files matched for configured patterns. Continuing without reference context.` → skip to Step 0.8
6. If > 30 files matched → display file list, use `AskUserQuestion`: "Found {N} reference docs. This will spawn {N} parallel sub-agents."
   - "Proceed with all {N} files"
   - "Let me refine the patterns" → show current `sources`/`exclude` config, stop and let user update `.code-forge.json`

#### 0.9.2 Display Matched Files

Display the matched file list:

```
Reference docs: {count} files matched
  {path_1}
  {path_2}
  ...
```

Proceed directly — no confirmation needed (unless > 30 files triggered 0.9.1 step 6).

#### 0.9.3 Parallel Sub-agent Summarization

Spawn N parallel sub-agents via `Task` tool, one per matched file:

- `subagent_type`: `"general-purpose"`
- `description`: `"Summarize reference doc: {file_path}"`

**Each sub-agent prompt:**

- The file path (sub-agent reads it from disk)
- Instruction to return ONLY a structured summary in this exact format:

```
DOC_PATH: {file_path}
DOC_TYPE: <document type>
SUMMARY: <2-3 sentence summary of what this document describes>
KEY_DECISIONS: <key architectural or design decisions, if any>
RELEVANCE_TAGS: <comma-separated topic tags>
```

**Target summary size:** ~300-500 bytes per doc.

**Error handling:** If a sub-agent fails to summarize a file, log a warning and skip that file:

```
Warning: Failed to summarize {path} — skipping
Reference docs: {success_count} of {total_count} files summarized successfully
```

#### 0.9.4 Store Reference Summaries

Collect all successful sub-agent results into a `reference_summaries` list (ordered by file path). Store in memory for use by Steps 2, 6, and 7.

#### 0.9.5 Deduplicate Against Input Doc

After the input document path is known (after Step 1), remove it from `reference_summaries` if present — the feature doc is already read directly by Steps 2 and 6. This deduplication happens lazily: the summaries are stored now, deduplication is applied when injecting into sub-agent prompts.

---

### Step 0.8: Prompt Mode — Delegate to spec-forge:feature

**This step only runs when the input is NOT a file path (does NOT start with `@`).** If the input starts with `@`, skip directly to Step 1.

When a user provides a text prompt instead of a file path, code-forge:plan delegates feature spec creation to spec-forge:feature.
This maintains the separation of concerns: spec-forge owns specification, code-forge owns implementation planning.

#### 0.8.1 Generate Slug

Convert the prompt text to a kebab-case slug for the feature name:

- ASCII text: lowercase, replace spaces/special chars with hyphens (e.g., "User Login Feature" → `user-login-feature`)
- Non-ASCII text (Chinese, Japanese, etc.): use `AskUserQuestion` to let user confirm or provide a custom slug. Suggest a reasonable English slug based on the prompt meaning.

#### 0.8.2 Check for Existing Feature Spec

Check if `{input_dir}/{slug}.md` already exists:

- **Exists** → use it directly, skip to 0.8.4
- **Does not exist** → continue to 0.8.3

#### 0.8.3 Auto-Delegate to spec-forge:feature

Invoke `spec-forge:feature` to generate the feature spec.

Launch `Task(subagent_type="general-purpose")`:

- Sub-agent prompt: "Invoke the spec-forge:feature skill for '{slug}'. The user's requirement is: '{original prompt text}'. Use standalone mode — generate the feature spec at docs/features/{slug}.md based on this requirement description. Keep the Q&A minimal since the user already provided context in the prompt."
- Wait for completion → verify `docs/features/{slug}.md` exists

If spec-forge:feature is not available (skill not installed), fall back to generating a minimal feature document directly:

```markdown
# {Feature Title}

> Feature spec for code-forge implementation planning.
> Source: auto-generated from prompt
> Created: {date}

## Purpose

{user's original prompt text, verbatim}

## Notes

- Generated from prompt by code-forge (spec-forge:feature not available)
- Consider running `/spec-forge:feature {slug}` for a more detailed spec
```

#### 0.8.4 Set File Path

Set `{input_dir}/{slug}.md` as the current input document path (prefixed with `@`), then continue to Step 1.
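The ASCII branch of the slug rule in 0.8.1 can be sketched as follows (a minimal illustration; `slugify` is a hypothetical helper name, not part of the skill — the non-ASCII branch is handled interactively via `AskUserQuestion`, not by code):

```python
import re

def slugify(prompt: str) -> str:
    """Convert an ASCII prompt to a kebab-case feature slug."""
    slug = prompt.lower()
    # Collapse spaces and special characters into single hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    # Drop any leading/trailing hyphens left by punctuation
    return slug.strip("-")

# slugify("User Login Feature") → "user-login-feature"
```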
---

### Step 1: Validate Input Document

#### 1.1 Check Document Path

User should provide an `@` path pointing to a **file** or **directory**:

```bash
# File mode — plan a single feature
/code-forge:plan @docs/features/user-auth.md

# Directory mode — list features and let user pick
/code-forge:plan @docs/features/
/code-forge:plan @../../aipartnerup/apcore
```

**Note:** Use configured path (`{input_dir}/`). Also accepts spec-forge tech-design files directly: `/code-forge:plan @docs/user-auth/tech-design.md`

#### 1.1.1 Directory Mode

If the `@` path resolves to a **directory** (not a file):

1. Scan for feature spec files in this order (stop at first match):
   - `{path}/docs/features/*.md`
   - `{path}/features/*.md`
   - `{path}/*.md`
2. If no `.md` files found: display error `"No feature specs found in {path}"` with the paths tried, then stop
3. If exactly 1 file found: use it directly (skip selection)
4. If multiple files found: display list and use `AskUserQuestion` to let user select:
   ```
   Feature specs found in {path}:
   1. acl-system.md
   2. core-executor.md
   3. schema-system.md
   ...
   ```
   - Options: one per file (show filename without `.md`)
5. Set the selected file as the input document path, then continue to Step 1.2

**Path resolution:** Both relative and absolute paths are supported. Relative paths are resolved from the current working directory. External project paths (e.g., `@../../other-project`) are valid — the feature spec does not need to be inside the current project.

#### 1.2-1.4 Validate Document and Handle Errors

Perform these checks on the provided document:

1. **File exists** — if not found, list available files in `{input_dir}/` and suggest corrections (check for typos)
2. **File is not empty** — if empty, suggest adding requirements content with a minimal example
3. **File is Markdown** — if not `.md`, warn and ask whether to continue as plain text

If no document is provided and Step 0.8 was not triggered: display usage instructions with examples.
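The directory-mode scan order from 1.1.1 amounts to a first-match-wins loop over glob patterns. A minimal sketch (`find_feature_specs` and `SCAN_PATTERNS` are hypothetical names chosen for illustration):

```python
from pathlib import Path

# Patterns tried in order; scanning stops at the first one
# that matches at least one file (per 1.1.1 step 1).
SCAN_PATTERNS = ["docs/features/*.md", "features/*.md", "*.md"]

def find_feature_specs(directory: str) -> list[Path]:
    root = Path(directory)
    for pattern in SCAN_PATTERNS:
        matches = sorted(root.glob(pattern))
        if matches:
            return matches
    return []  # caller reports: No feature specs found in {path}
```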
On any error: display the issue, suggest a fix, and stop.

#### 1.5 Detect Existing Plan

Check whether `{output_dir}/{feature_name}/` already exists:

- **Has `state.json`** → **Resume mode**: show progress summary (task statuses), ask via `AskUserQuestion`:
  - Continue (recommended) — resume from current progress
  - Restart — delete all files and regenerate
  - View plan — open plan.md
  - Cancel
- **Directory exists but no `state.json`** → **Conflict mode**: warn about existing files, ask:
  - Backup and overwrite — move to `.backup/` then regenerate
  - Force overwrite — overwrite directly
  - Cancel — handle manually then rerun

### Step 2: Analyze Document Content (via Sub-agent)

**Offload to sub-agent** to keep the full document content out of the main context.

Spawn a `Task` tool call with:

- `subagent_type`: `"general-purpose"`
- `description`: `"Analyze feature document"`

**Sub-agent prompt must include:**

- The input document file path (so the sub-agent reads it, NOT the main context)
- Instruction to return ONLY a structured summary
- If `reference_summaries` is non-empty (from Step 0.9), include a `## Reference Context` section:

```
## Reference Context

The following project documents provide architectural context. Use these to align your analysis with existing project decisions and patterns.

{reference_summaries — all summaries concatenated, separated by blank lines}
```

**Sub-agent must analyze and return:**

1. **Feature Name** — extracted from filename or document title (kebab-case)
2. **Technical Requirements** — tech stack, frameworks, languages mentioned
3. **Functional Scope** — 2-3 sentence summary of what needs to be implemented
4. **Constraints** — performance, security, compatibility requirements
5. **Testing Requirements** — testing strategy mentioned, or "not specified"
6. **Key Components** — major modules/components to build (bulleted list)
7. **Estimated Complexity** — low/medium/high with brief rationale

**Main context retains:** Only the structured summary returned by the sub-agent (~1-2KB). The full document content stays in the sub-agent's context and is discarded.

**Important:** Store the returned summary for use in Steps 3 and 6.

### Step 3: Ask for Additional Information

If not clearly specified in the document, use a **single** `AskUserQuestion` combining up to 3 questions. Skip any question already answered by the document:

**Question 1: Technology Stack Confirmation**

- "Use {extracted_tech} mentioned in document"
- "Use existing project tech stack" — analyze project code, use existing frameworks
- "Custom" — user specifies

**Question 2: Testing Strategy**

- "Strict TDD (Recommended)" — write tests first for each task
- "Tests After" — implement first, write tests at end
- "Minimal Testing" — test only core logic

**Question 3: Task Granularity**

- "Fine-grained (5-10 tasks)" — each task 1-2 hours
- "Medium-grained (3-5 tasks)" — each task half day
- "Coarse-grained (2-3 tasks)" — each task 1-2 days

### Step 4: Create Directory Structure

Extract feature name from filename or document title (convert to kebab-case).

**Output directory:** `{output_dir}` defaults to `planning/` — **NOT** `docs/plans/`, `docs/planning/`, or any other path. Always use the resolved `output_dir` from Step 0 configuration.

Create directory structure and **proceed directly** — no confirmation needed:

```
{output_dir}/{feature_name}/
├── overview.md
├── plan.md
├── tasks/
└── state.json
```

Example with defaults: `planning/user-auth/`, `planning/user-auth/tasks/`, etc.

### Step 6: Generate plan.md (via Sub-agent)

**Offload to sub-agent** to keep plan generation output out of the main context.
Spawn a `Task` tool call with:

- `subagent_type`: `"general-purpose"`
- `description`: `"Generate implementation plan"`

**Sub-agent prompt must include:**

- The input document file path (sub-agent re-reads the original for full context)
- The structured summary from Step 2 (paste it into the prompt)
- User answers from Step 3 (tech stack choice, testing strategy, task granularity)
- The output file path: `{output_dir}/{feature_name}/plan.md`
- Instructions to write the plan file AND return a concise task list summary
- If `reference_summaries` is non-empty, include a `## Reference Context` section:

```
## Reference Context

The following project documents provide architectural context. Ensure the implementation plan is consistent with existing architecture and conventions.

{reference_summaries — all summaries concatenated, separated by blank lines}
```

**Sub-agent must write `plan.md`** with these required sections:

- **Goal** — one sentence describing what to implement
- **Architecture Design** — component structure, data flow, technical choices with rationale
- **Task Breakdown** — dependency graph (mermaid `graph TD`) + task list with estimated time and dependencies
- **Risks and Considerations** — identified technical challenges
- **Acceptance Criteria** — checklist (tests pass, code review, docs, performance)
- **References** — related technical docs and examples

**Task ID naming rules (critical):** Task IDs must be descriptive names **without numeric prefixes**. Use `setup`, `models`, `api` — **NOT** `01-setup`, `02-models`, `03-api`. Execution order is controlled by `overview.md` and `state.json`, not by filename ordering or numeric prefixes.

**Sub-agent must return** (as response text, separate from the file it writes) a concise task list summary:

    TASK_COUNT: <n>
    TASKS:
    - <id>: <title> [depends on: <deps>] (~<estimate>)
    - <id>: <title> [depends on: <deps>] (~<estimate>)
    ...
    EXECUTION_ORDER: <id>, <id>, ...

**Main context retains:** Only the task list summary (~1-2KB). The full plan content is on disk.
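The task ID naming rule above can be checked mechanically. A minimal sketch (`valid_task_id` is a hypothetical helper, not part of the skill; the pattern assumes kebab-case IDs as used in the examples):

```python
import re

def valid_task_id(task_id: str) -> bool:
    # Must start with a lowercase letter, which rejects numeric
    # prefixes like "01-setup"; hyphens and digits allowed after.
    return bool(re.fullmatch(r"[a-z][a-z0-9-]*", task_id))

# valid_task_id("setup")    → True
# valid_task_id("01-setup") → False
```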
### Step 7: Task Breakdown (via Sub-agent)

**Offload to sub-agent** to keep task file generation out of the main context.

Spawn a `Task` tool call with:

- `subagent_type`: `"general-purpose"`
- `description`: `"Generate task breakdown files"`

**Sub-agent prompt must include:**

- The plan file path: `{output_dir}/{feature_name}/plan.md` (sub-agent reads it from disk)
- The task list summary returned by Step 6 (paste it into the prompt)
- The tasks directory path: `{output_dir}/{feature_name}/tasks/`
- All the principles and format requirements below
- If `reference_summaries` is non-empty, include a `## Reference Context` section:

```
## Reference Context

The following project documents provide architectural context. Ensure task steps follow project conventions and integrate with existing components.

{reference_summaries — all summaries concatenated, separated by blank lines}
```

**Sub-agent must create `tasks/{name}.md`** for each task, following these principles:

- TDD first: test → implement → verify
- Concrete steps: include code examples and commands
- Traceable: annotate dependencies (depends on / required by)

**Each task file must include:**

- **Goal** — what this task accomplishes
- **Files Involved** — files to create/modify
- **Steps** — numbered, with code examples where helpful
- **Acceptance Criteria** — checklist
- **Dependencies** — depends on / required by
- **Estimated Time**

**Naming (critical):** Use descriptive filenames: `setup.md`, `models.md`, `api.md` — **NO numeric prefixes** (`01-setup.md`, `02-models.md` are WRONG). Execution order is defined in the `overview.md` Task Execution Order table and the `state.json` `execution_order` array, never in filenames.

**Sub-agent must return** (as response text) the list of generated files:

    GENERATED_FILES:
    - tasks/<id>.md: <title>
    - tasks/<id>.md: <title>
    ...

**Main context retains:** Only the file list (~0.5KB). All task file content is on disk.
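A task file satisfying the requirements above might look like this. All content here is illustrative — task name, files, and times are assumptions, not output the skill is required to produce:

```markdown
# Task: models

## Goal
Define the data models required by the feature.

## Files Involved
- `src/models/user.py` (create)
- `tests/test_models.py` (create)

## Steps
1. Write failing tests for the model (TDD first)
2. Implement the model until the tests pass
3. Run the project's test command to verify

## Acceptance Criteria
- [ ] Tests pass
- [ ] Public fields documented

## Dependencies
- Depends on: setup
- Required by: api

## Estimated Time
~2 hours
```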
### Step 5: Generate overview.md

**Execution order:** This step executes AFTER Steps 6 and 7. Use the task list summary returned by the Step 6 sub-agent and the file list returned by the Step 7 sub-agent to populate task-related sections.

Generate feature overview with these required sections:

- **Overview** — extract or summarize from source document
- **Scope** — included and excluded items
- **Technology Stack** — language/framework, key dependencies, testing tools
- **Task Execution Order** — table: #, Task File (linked to `./tasks/`), Description, Status
- **Progress** — total/completed/in_progress/pending counts
- **Reference Documents** — link to source document

### Step 8: Initialize state.json

Create `state.json` with these required fields:

| Field | Description |
|-------|-------------|
| `feature` | Feature name (string) |
| `created`, `updated` | ISO timestamps |
| `status` | `"pending"` initially |
| `execution_order` | Array of task IDs in execution order |
| `progress` | `{ total_tasks, completed, in_progress, pending }` |
| `tasks` | Array of task objects (see below) |
| `metadata` | `{ source_doc, created_by: "code-forge", version: "1.0" }` |

Each task object in the `tasks` array:

| Field | Description |
|-------|-------------|
| `id` | Task identifier (matches filename without `.md`) |
| `file` | Relative path: `tasks/{id}.md` |
| `title` | Human-readable task title |
| `status` | `"pending"` initially |
| `started_at`, `completed_at` | ISO timestamps or `null` |
| `assignee` | `null` initially |
| `commits` | Empty array `[]` initially |

### Step 8.5: Generate/Update Project-Level Overview

After initializing `state.json`, generate or update `{output_dir}/overview.md` — a bird's-eye view of all features.
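The Step 8 field tables translate directly into an initial `state.json` document. A minimal sketch, assuming the fields and initial values listed in those tables (`init_state` and the sample tasks are hypothetical, for illustration only):

```python
import json
from datetime import datetime, timezone

def init_state(feature: str, source_doc: str,
               tasks: list[tuple[str, str]]) -> dict:
    """Build the initial state.json payload from (id, title) pairs."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "feature": feature,
        "created": now,
        "updated": now,
        "status": "pending",
        "execution_order": [task_id for task_id, _ in tasks],
        "progress": {"total_tasks": len(tasks), "completed": 0,
                     "in_progress": 0, "pending": len(tasks)},
        "tasks": [
            {"id": task_id, "file": f"tasks/{task_id}.md", "title": title,
             "status": "pending", "started_at": None, "completed_at": None,
             "assignee": None, "commits": []}
            for task_id, title in tasks
        ],
        "metadata": {"source_doc": source_doc,
                     "created_by": "code-forge", "version": "1.0"},
    }

state = init_state("user-auth", "docs/features/user-auth.md",
                   [("setup", "Project setup"), ("models", "Data models")])
# json.dumps(state) is what gets written to {output_dir}/{feature_name}/state.json
```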
@../shared/overview-generation.md

#### 8.5.3 When to Regenerate

- After creating a new feature plan (this step)
- After feature completion

Display: `Project overview updated: {output_dir}/overview.md`

---

### Step 9: Display Plan and Next Steps

Output plan summary:

```
Implementation plan generated

Location: {output_dir}/{feature_name}/
Total Tasks: {count}
Estimated Total Time: {estimate}

Task Overview:
  {id} - {title} [{status}]
  ...

Next steps:
  /code-forge:impl {feature_name}            Execute tasks
  /code-forge:status {feature_name}          View progress
  cat {output_dir}/{feature_name}/plan.md    View detailed plan
```

## Integration with Claude Code Tasks

Optionally synchronize tasks to Claude Code's Task system:

- For each task in `execution_order`, call `TaskCreate` with:
  - `subject`: `"<id>: <title>"`
  - `description`: contents of the task file
  - `activeForm`: `"Implementing <title>"`

## Coordination with Other Skills

- **With spec-forge:feature**: Generate feature spec first → `/code-forge:plan @docs/features/{feature}.md`
- **With spec-forge tech-design**: Plan directly from tech-design → `/code-forge:plan @docs/{feature}/tech-design.md`
- **With /brainstorming**: Brainstorm design first → generate feature spec → `/code-forge:plan @docs/features/{feature}.md`
- **With /code-forge:impl**: After plan generated → `/code-forge:impl {feature}` to execute
- **With /code-forge:review**: After implementation → `/code-forge:review {feature}` to review

## Notes

1. **Document Quality**: The more detailed the input document, the more accurate the generated plan
2. **Prompt Mode**: When using prompt mode, the generated document is minimal. Step 2 sub-agent analysis handles expansion.
3. **Git Commits**: Recommend committing the planning directory and `.code-forge.json` to Git for team visibility
4. **State Files**: `state.json` can be optionally committed or added to `.gitignore`
5. **Task Granularity**: Recommend 1-3 hours per task for easy tracking
6. **Dependency Management**: Dependencies between tasks affect execution order
7. **Project Overview**: The project-level `overview.md` in `{output_dir}/` is auto-generated and shows all features, dependencies, and recommended implementation order
8. **Tool Discovery**: `.code-forge.json` contains a `_tool` section with the plugin URL — new team members can find and install the tool from there
9. **Status Definitions**: `pending`, `in_progress`, `completed`, `blocked`, `skipped`
10. **Directory Structure**:

    ```
    docs/
    └── features/           # Input: feature specs (owned by spec-forge)
        └── user-auth.md    # Generated by /spec-forge:feature or extracted from tech-design

    planning/               # Output: implementation plans (owned by code-forge)
    ├── overview.md         # Project-level overview (auto-generated)
    └── {feature}/          # Per-feature directory
        ├── overview.md     # Feature overview + task execution order
        ├── plan.md         # Implementation plan
        ├── tasks/          # Task breakdown files
        └── state.json      # Status tracking
    ```

11. **Naming Conventions**: Feature directories use kebab-case (`user-auth`). Task files use descriptive names (`setup.md`). No "claude-" or tool prefixes. Suitable for Git commits.
12. **Reference Docs**: Configure `reference_docs.sources` in `.code-forge.json` to auto-discover project documentation. Each doc is summarized by a parallel sub-agent and injected as context into Steps 2, 6, and 7. Reference context is baked into generated plan.md and task files — downstream skills do not re-read reference docs.