---
name: compound
description: Capture learnings (problems, decisions, gotchas) from sessions to make future agents more effective
argument-hint: [task-identifier]
---

Capture and compound knowledge from any session. Extracts problems solved, decisions made, and gotchas discovered, writing them to the task file for future agent retrieval. This skill can run after any phase or any standalone session; it compounds the agent's effectiveness over time.

Files involved:
- `./apex/tasks/[ID].md` - task file (gains a `future-agent-notes` section)
- `AGENTS.md` - always-loaded agent context

I'll capture learnings from this session to help future agents. Please provide the task identifier, or I'll try to find the current task from context. You can find active tasks in `./apex/tasks/`, or run with: `/apex:compound [identifier]`

**Step 1: Load the task file and begin knowledge capture**

1. Read `./apex/tasks/[identifier].md`
2. Parse all sections for context:
   - `` - What was investigated
   - `` - What was decided
   - `` - What was built, and issues encountered
   - `` - Review findings, reflection
3. Review the conversation history for additional context
4. Extract candidate learnings for each category

**Step 2: Check for related past learnings**

Avoid duplication and surface related past learnings.

```bash
# Pseudocode: extract keywords from the current task
keywords = extract_keywords(task_intent, task_title)

# Search past task files for similar learnings
Grep: "[keywords]" in apex/tasks/*/
Grep: ".*[keywords]" in apex/tasks/*.md
```

Display to user:

```
Found potentially related learnings:

1. [task-id]: "[Brief description of learning]"
2. [task-id]: "[Brief description of learning]"

Continue documenting? (These will be linked as related)
- Yes - Continue (will add to related_tasks)
- Skip - This is a duplicate, don't document
```

Wait for the user's response before proceeding. If nothing related is found, proceed directly to step 3.

**Step 3: Extract learnings**

Review the task file and conversation to identify:

**Problems Encountered**:
- What broke or didn't work as expected?
- What symptoms were observed?
- What was the root cause?
- How was it fixed?
- How can it be prevented in future?

**Decisions Made**:
- What architectural or implementation choices were made?
- What alternatives were considered?
- Why was this choice made over the others?

**Gotchas Discovered**:
- What surprised you or was counterintuitive?
- What looked like X but was actually Y?
- What documentation was misleading or missing?

Ask yourself:

1. "What would have saved time if I had known it at the start?"
2. "What mistake did we make that future agents should avoid?"
3. "What decision required significant thought that future agents can reuse?"
4. "What behavior was unexpected or undocumented?"

Only document if there is at least ONE meaningful learning. If the task was trivial with nothing surprising, say: "No significant learnings to capture from this session."

Entry template:

```xml
[ISO timestamp]

[Clear description of the problem]
- [Observable sign 1]
- [Observable sign 2]
[Why it happened]
[What fixed it]
[How to avoid in future]

[What we chose]
[What we considered]
[Why this choice - be specific]

[Surprising thing - be specific and actionable]
```

**Writing guidelines**:
- Be specific, not vague ("Missing index on users.organization_id", not "database was slow")
- Include actionable prevention/rationale
- Skip empty sections (don't include a problems block if there were no problems)
- Each item should be self-contained and understandable without full context

**Step 4: Write to the task file**

1. Read the current task file
2. Append the `future-agent-notes` section after the `` section (or after the last existing section)
3. Update the frontmatter with `related_tasks` if similar tasks were found in step 2

**Frontmatter update** (if related tasks were found):

```yaml
---
# ... existing frontmatter ...
related_tasks: [task-id-1, task-id-2]
---
```

Display the captured learnings:

```
Learnings captured:

Problems (N):
- [Brief summary of each]

Decisions (N):
- [Brief summary of each]

Gotchas (N):
- [Brief summary of each]

Related tasks linked: [list or "none"]
```

**Step 5: Offer promotion to AGENTS.md**

Critical learnings should become "always loaded" context in AGENTS.md. Offer to promote:

```
Promote any to AGENTS.md?
```
(These become "always loaded" context.)

```
1. [Problem] [Brief description]
2. [Decision] [Brief description]
3. [Gotcha] [Brief description]
N. None - done

Select (numbers comma-separated, or N for none):
```

- If "N" or "none": skip to the final step
- If numbers are selected: proceed to promotion

**Promotion to AGENTS.md**:

1. Read AGENTS.md fully
2. Find the `## Learnings` section (create it at the end if it doesn't exist)
3. Check for duplicates (don't add an item if a similar one already exists)
4. Append the selected items in condensed format
5. Write the updated AGENTS.md

```markdown
## Learnings

### Problems
- **[Short title]** - [1-2 sentence description with actionable prevention]. (from [task-id], [date])

### Decisions
- **[Choice made]** - [1-2 sentence rationale]. (from [task-id], [date])

### Gotchas
- **[Short title]** - [1-2 sentence explanation]. (from [task-id], [date])
```

**Formatting rules**:
- Group by type (Problems, Decisions, Gotchas)
- Create a subsection if it doesn't exist
- Append to an existing subsection if it does
- Always include the source task and date

Before adding, search for similar content:

```bash
Grep: "[key phrase from learning]" in AGENTS.md
```

If a similar entry exists, skip it with the message: "Similar learning already in AGENTS.md, skipping."

**Checklist**:
- Task file read and context gathered
- Existing learnings checked for duplicates
- Meaningful learnings identified (or explicitly none)
- `future-agent-notes` section written to the task file
- `related_tasks` updated in frontmatter if applicable
- Promotion to AGENTS.md offered
- Selected items promoted with the proper format
- Duplicates avoided in AGENTS.md

**When to run this skill**:
- After `/apex:ship` completes (ship will prompt you)
- After debugging sessions
- After any significant problem-solving
- After making architectural decisions
- When you discover something surprising
- Anytime knowledge would help future agents
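The duplicate checks above (against past task files and against AGENTS.md) can be approximated with a simple keyword-overlap heuristic. Below is a minimal sketch; the `is_duplicate` helper, the stopword list, and the 0.6 threshold are illustrative assumptions, not part of this skill:

```python
import re

# Small stopword list so common words don't inflate overlap (assumption).
STOPWORDS = {"the", "a", "an", "was", "on", "in", "to", "of", "and", "for"}

def keywords(text: str) -> set[str]:
    """Lowercase word tokens minus stopwords, used as a crude signature."""
    return {w for w in re.findall(r"[a-z0-9_.]+", text.lower()) if w not in STOPWORDS}

def is_duplicate(learning: str, agents_md: str, threshold: float = 0.6) -> bool:
    """True if any existing bullet in AGENTS.md shares enough keywords."""
    new = keywords(learning)
    if not new:
        return False
    for line in agents_md.splitlines():
        # Only compare against bullet entries, not headings.
        if not line.lstrip().startswith("- "):
            continue
        overlap = len(new & keywords(line)) / len(new)
        if overlap >= threshold:
            return True
    return False

# Usage (hypothetical AGENTS.md content)
existing = (
    "## Learnings\n\n### Problems\n"
    "- **Missing index** - Missing index on users.organization_id "
    "caused slow queries. (from task-42, 2024-05-01)\n"
)
print(is_duplicate("Missing index on users.organization_id made queries slow", existing))  # → True
print(is_duplicate("Webpack config needs NODE_OPTIONS flag", existing))  # → False
```

A real implementation would more likely delegate this to the Grep searches shown above; the point is only that the skip-if-similar decision can be made mechanical rather than judged fresh each session.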