# Agents

Specialized agents that do heavy work and return concise summaries to preserve context.

## Core Philosophy

> β€œDon't anthropomorphize subagents. Use them to organize your prompts and elide
> context. Subagents are best when they can do lots of work but then provide
> small amounts of information back to the main conversation thread.”
>
> – Adam Wolff, Anthropic

## Available Agents

### πŸ” `code-analyzer`

- **Purpose**: Hunt bugs across multiple files without polluting main context
- **Pattern**: Search many files β†’ Analyze code β†’ Return bug report
- **Usage**: When you need to trace logic flows, find bugs, or validate changes
- **Returns**: Concise bug report with critical findings only

### πŸ“„ `file-analyzer`

- **Purpose**: Read and summarize verbose files (logs, outputs, configs)
- **Pattern**: Read files β†’ Extract insights β†’ Return summary
- **Usage**: When you need to understand log files or analyze verbose output
- **Returns**: Key findings and actionable insights (80-90% size reduction)

### πŸ§ͺ `test-runner`

- **Purpose**: Execute tests without dumping output to main thread
- **Pattern**: Run tests β†’ Capture to log β†’ Analyze results β†’ Return summary
- **Usage**: When you need to run tests and understand failures
- **Returns**: Test results summary with failure analysis

### πŸ”€ `parallel-worker`

- **Purpose**: Coordinate multiple parallel work streams for an issue
- **Pattern**: Read analysis β†’ Spawn sub-agents β†’ Consolidate results β†’ Return summary
- **Usage**: When executing parallel work streams in a worktree (all agents work in the same worktree directory with file-level isolation)
- **Returns**: Consolidated status of all parallel work

## Why Agents?
Agents are **context firewalls** that protect the main conversation from information overload:

```
Without Agent:
Main thread reads 10 files β†’ Context explodes β†’ Loses coherence

With Agent:
Agent reads 10 files β†’ Main thread gets 1 summary β†’ Context preserved
```

## How Agents Preserve Context

1. **Heavy Lifting** - Agents do the messy work (reading files, running tests, implementing features)
2. **Context Isolation** - Implementation details stay in the agent, not the main thread
3. **Concise Returns** - Only essential information returns to main conversation
4. **Parallel Execution** - Multiple agents can work simultaneously without context collision

## When to Use Which Agent?

Use this decision tree to select the right agent:

```
Need to understand a log file or verbose output?
β†’ file-analyzer

Made code changes and need to check for bugs?
β†’ code-analyzer

Need to run tests and understand failures?
β†’ test-runner

Large feature spanning multiple independent areas?
β†’ parallel-worker

Simple task (single file, straightforward change)?
β†’ No agent needed, work directly
```

## Example Usage

```bash
# Analyzing code for bugs
Task: "Search for memory leaks in the codebase"
Agent: code-analyzer
Returns: "Found 3 potential leaks: [concise list]"
Main thread never sees: The hundreds of files examined

# Running tests
Task: "Run authentication tests"
Agent: test-runner
Returns: "2/10 tests failed: [failure summary]"
Main thread never sees: Verbose test output and logs

# Parallel implementation
Task: "Implement issue #1234 with parallel streams"
Agent: parallel-worker
Returns: "Completed 4/4 streams, 15 files modified"
Main thread never sees: Individual implementation details
```

## Creating New Agents

New agents should follow these principles:

1. **Single Purpose** - Each agent has one clear job
2. **Context Reduction** - Return 10-20% of what you process
3. **No Roleplay** - Agents aren't "experts", they're task executors
4. **Clear Pattern** - Define input β†’ processing β†’ output pattern
5. **Error Handling** - Gracefully handle failures and report clearly

## Anti-Patterns to Avoid

❌ **Creating "specialist" agents** (database-expert, api-expert)
Agents don't have different knowledge - they're all the same model

❌ **Returning verbose output**
Defeats the purpose of context preservation

❌ **Making agents communicate with each other**
Use a coordinator agent instead (like parallel-worker)

❌ **Using agents for simple tasks**
Only use agents when context reduction is valuable

## Anthropic Workflow Patterns

Our agents implement proven patterns from [Anthropic's research on building effective agents](https://www.anthropic.com/research/building-effective-agents):

| Agent               | Pattern              | How It Works                                                               |
| ------------------- | -------------------- | -------------------------------------------------------------------------- |
| **code-analyzer**   | Evaluator-Optimizer  | Reviews code iteratively, provides feedback, suggests improvements         |
| **file-analyzer**   | Context Reduction    | Pure summarization - not a workflow, but a building block                  |
| **test-runner**     | Orchestrator-Workers | Delegates test execution, analyzes results, synthesizes findings           |
| **parallel-worker** | Orchestrator-Workers | Dynamically spawns sub-agents, coordinates execution, consolidates results |

These patterns are battle-tested across Anthropic's customer base and represent the most effective approaches for agentic systems.
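The orchestrator-workers shape behind `parallel-worker` can be sketched in plain Python. This is a minimal illustration, not the actual agent implementation: `run_worker`, the task names, and the summary format are all hypothetical. The point is the information flow: workers may churn through arbitrary detail internally, but only one-line summaries cross back to the orchestrator, which returns a single consolidated report.

```python
from concurrent.futures import ThreadPoolExecutor

def run_worker(task: str) -> str:
    # A real worker would read files, edit code, or run tests here,
    # generating lots of intermediate output that stays local to it.
    return f"{task}: done"  # only a concise summary escapes the worker

def orchestrate(tasks: list[str]) -> str:
    """Fan tasks out to workers in parallel, then consolidate their summaries."""
    with ThreadPoolExecutor() as pool:
        summaries = list(pool.map(run_worker, tasks))
    # The caller sees one short report, never the workers' intermediate detail.
    return f"Completed {len(summaries)}/{len(tasks)} streams: " + "; ".join(summaries)

print(orchestrate(["auth", "api", "ui"]))
```

The same structure applies whether the "workers" are threads, processes, or spawned sub-agents: the context firewall is the narrow return value, not the concurrency mechanism.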
## Agent Performance Guidelines

Guidelines for using agents effectively:

**Context Reduction Targets:**

- file-analyzer: 80-90% reduction (return 10-20% of processed content)
- code-analyzer: Return only critical findings, not all examined code
- test-runner: Summarize results, not full test output
- parallel-worker: Consolidate sub-agent results into single summary

**Performance Boundaries:**

- file-analyzer: Files >10K lines β†’ focus on key sections only
- code-analyzer: >50 files changed β†’ analyze critical paths first
- test-runner: >100 tests β†’ batch and summarize by category
- parallel-worker: >10 streams β†’ serialize based on dependencies

**When NOT to Use Agents:**

- Single file reads (use Read tool directly)
- Simple grep searches (use Grep tool directly)
- Straightforward edits (use Edit tool directly)
- Questions that don't require processing many files

**Agent Overhead:**

- Agents add 30-90 seconds of execution time
- Only use when context savings justify the time cost
- For quick tasks, direct tool usage is more efficient

## Agent Testing

To ensure agents work effectively, test them with these scenarios:

**Test Your Agents:**

1. **Run realistic examples** - Use actual project files and tasks
2. **Check output quality** - Verify summaries preserve critical information
3. **Measure context reduction** - Calculate token savings vs direct tool use
4. **Test edge cases** - Empty results, too many results, ambiguous inputs
5. **Iterate on failures** - When agents make mistakes, refine their prompts

**Common Failure Patterns:**

- Agent returns too much detail (violates context reduction goal)
- Agent misses critical information (too aggressive summarization)
- Agent makes incorrect assumptions (needs better examples)
- Agent fails on edge cases (needs explicit handling)

Each agent definition includes an "Edge Cases" section with specific handling instructions for common scenarios.
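Measuring context reduction can be as simple as comparing rough token counts of the raw material against the summary an agent returns. The sketch below uses whitespace-separated words as a crude token proxy; that is an assumption for illustration, since real token counts depend on the model's tokenizer.

```python
def approx_tokens(text: str) -> int:
    """Crude token proxy: count whitespace-separated words."""
    return len(text.split())

def context_reduction(raw: str, summary: str) -> float:
    """Percent of context saved by returning `summary` instead of `raw`."""
    raw_tokens = approx_tokens(raw)
    if raw_tokens == 0:
        return 0.0
    return 100.0 * (1 - approx_tokens(summary) / raw_tokens)

raw_log = "line " * 1000  # stand-in for verbose tool output
summary = "3 errors: timeout, missing config, flaky assertion"
print(f"{context_reduction(raw_log, summary):.1f}% reduction")
```

Tracking this number across realistic tasks makes the 80-90% targets above checkable rather than aspirational: an agent that consistently lands below the target is returning too much detail.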