---
name: research
description: "Deep research skill for technical topics. Invoke when user asks to research a technology, library, architecture pattern, or solution approach. Uses iterative multi-source search: each round informs the next search query. Outputs structured report with findings, analysis, trade-offs, and recommendation. Modes: shallow (quick overview), standard (4-6 sources), deep (iterative until convergence)."
allowed-tools:
  - Bash
  - WebSearch
  - WebFetch
  - Read
  - Grep
metadata:
  token-cost-tier: instruction
  token-cost-estimate: 5000
---

# /research — Iterative Multi-Source Technical Research

## Usage

```
/research <topic> [--depth shallow|standard|deep] [--focus <aspect>] [--output report|bullets|compare]
```

- `<topic>` — what to research (e.g., "FastAPI vs Django for async APIs", "Redis pub/sub patterns")
- `--depth shallow` — 1 round, 2-3 sources, ~1500 tok
- `--depth standard` — 2 rounds, 4-6 sources, ~5000 tok (default)
- `--depth deep` — iterative until convergence, 6-10+ sources, ~12000 tok
- `--focus <aspect>` — lens to apply: performance | security | scalability | implementation | comparison
- `--output report` — full structured report (default)
- `--output bullets` — concise bullet list
- `--output compare` — comparison table

---

## Core Methodology (from iterative research principles)

**Each round:**

1. Search → read top results
2. Extract key facts, open questions, contradictions
3. Identify what you still don't know
4. Form next-round queries targeting the gaps
5. Repeat until no new material information is found (deep mode) or the round limit is reached

**Source hierarchy (prefer in order):**

1. Official documentation (highest trust)
2. GitHub issues/PRs (real-world problems)
3. Engineering blogs from companies that use the tech at scale
4. Benchmarks with methodology disclosed
5. Community discussion (Stack Overflow, Reddit, Discord) — lowest trust, only for "gotchas"

---

## Step 0 — Announce Plan

```
Research Plan: <topic>
Depth: <depth>
Round 1: Initial survey (~N sources)
Round 2: Gap-filling (~N sources)   [if standard/deep]
...
Estimated: ~<N> tokens
```

---

## Step 1 — Local Scan First (free, fast)

Before searching the web, check whether we already have relevant indexed knowledge:

```bash
# Check if the library is already indexed
ls .claude/lib-index/ 2>/dev/null | grep -i "<topic keyword>"
```

```bash
# Check project knowledge for relevant context
grep -rl "<topic keyword>" .claude/knowledge/ 2>/dev/null | head -5
```

If found: load the index first, note what is already known, then search only for the gaps.

---

## Step 2 — Round 1: Initial Survey

Form 2-3 targeted search queries. Prefer specific queries over broad ones.

**Query construction rules:**

- Include version/year if relevant: `"FastAPI 0.115 async patterns 2024"` (see the version-lookup sketch after this list)
- Include context: `"Redis pub/sub vs Kafka when to use"`
- Include problem frame: `"PostgreSQL connection pooling production gotchas"`

For each source found:

1. Fetch the page
2. Extract: core concept, key facts, caveats, version requirements
3. Note source type (official docs / engineering blog / benchmark / community)
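A quick way to pin the version string before forming Round 1 queries (a minimal sketch, assuming the `gh` CLI is available; `fastapi/fastapi` is an illustrative repo, not part of this skill):

```bash
# Hypothetical helper: look up the latest release tag so queries can
# include a real version string. The repo name is illustrative only.
gh api repos/fastapi/fastapi/releases/latest --jq .tag_name 2>/dev/null \
  || echo "no release info found, omit the version from the query"
```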
**For code-related research:** Also check GitHub:

```bash
# Search for real-world usage patterns
gh search repos "<topic> language:typescript stars:>100" --limit 5 2>/dev/null | head -10

# Check for known issues
gh search issues "<topic> is:open label:bug" --limit 5 2>/dev/null | head -10
```

**Extract from each source:**

- Main recommendation / approach
- Trade-offs mentioned
- Caveats / gotchas
- Contradictions with other sources

---

## Step 3 — Gap Analysis (between rounds)

After Round 1, identify:

```
## Gap Analysis after Round 1

Known:
- <established fact>

Unknown / conflicting:
- <gap> → will search in Round 2
- <contradiction> → needs resolution

Reliability notes:
- <source X> claims Y but no benchmark cited — unverified
```

Stop here at shallow depth. Continue to Round 2 at standard/deep.

---

## Step 4 — Round 2+: Gap-Filling

Form new queries specifically targeting the identified gaps. Focus on:

- Resolving contradictions (find the canonical source)
- Finding real-world scale examples for unverified claims
- Locating benchmarks or case studies

Repeat rounds until:

- No meaningful new information appeared in the last round (deep-mode convergence)
- The round limit is reached (standard: 2 rounds, deep: up to 4 rounds)

---

## Step 5 — Synthesis

### If `--output report` (default):

```
# Research Report: <topic>

Date: <date> | Depth: <depth> | Sources: <N>

## TL;DR (3 sentences max)

## Key Findings

### Finding 1: <title>
Source: [<name>](<url>) — <summary>

### Finding 2: ...

## Trade-offs

| Aspect | Option A | Option B |
|--------|----------|----------|
| Performance | ... | ... |
| Scalability | ... | ... |
| Complexity | ... | ... |

## Gotchas & Caveats
- <gotcha> (Source: X)
- <gotcha> (Source: Y)

## Recommendation
For <use case>, use <option> because <reason>.
If <other condition>, consider <alternative> instead because <reason>.

## What This Research Can't Answer
- <open question>
- <open question>

## Sources
1. [<name>](<url>) — <type> — fetched <date>
...
```

### If `--output bullets`:

```
## <topic> — Research Summary

- <key fact 1>
- <key fact 2>
- **Recommendation:** <one line>
- **Watch out:** <main gotcha>
```

### If `--output compare`:

```
## <option A> vs <option B>

| Criterion | <A> | <B> | Winner |
|-----------|-----|-----|--------|
...
```

---

## Honesty Rules

- **Never state something as fact without a source.** If you infer, say "likely" or "based on X".
- **Never invent API signatures or method names.** If you're not sure an API exists, say so (see the spot-check sketch at the end of this file).
- **Contradict official docs if real-world evidence is stronger**, but cite both.
- **Flag version-specific information**: "As of v2.3, this behavior changed."
- **The "What This Research Can't Answer" section is mandatory** — it prevents overconfidence.

---

## Example Queries

```
/research "FastAPI vs Django REST for high-throughput async API" --depth standard --focus performance
/research "Redis pub/sub patterns for real-time notifications" --depth deep --output compare
/research "PostgreSQL row-level security with multi-tenant SaaS" --depth standard --focus security
/research "React Server Components vs client-side fetching tradeoffs" --depth shallow --output bullets
```
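---

One way to apply the "never invent API signatures" rule in practice (a minimal sketch, assuming the topic is a locally installed Python package; `fastapi` and `FastAPI` are illustrative names, not part of this skill):

```bash
# Hypothetical spot-check before citing an API in the report: confirm the
# symbol exists in the locally installed package instead of trusting a
# blog post. Package and attribute names are illustrative only.
python -c 'import fastapi; print(hasattr(fastapi, "FastAPI"))' 2>/dev/null \
  || echo "package not importable locally, mark the API as unverified"
```

If the package isn't installed, keep the claim in the report but flag it as unverified, matching the reliability notes in Step 3.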