---
name: parallax
description: Use when the user wants multi-perspective, multi-agent research, steelmanned debate coverage, stakeholder analysis, or a balanced report on a contested topic. Also triggers on the /parallax slash command.
---

# Parallax — multi-agent research

Given a topic, parallax assembles a team of analyst agents, each with a distinct identity and research mission:

1. Recruits a tailored roster of analyst agents for the topic.
2. Lets the user approve the roster, or edit it by saying things like "add X" or "remove Y".
3. Deploys each agent to research the topic from its own viewpoint, staying in role — drawing on available skills, MCP tools, and conversation context before falling back to web search.
4. Synthesizes agent reports into one structured markdown document: convergent findings, comparison table (when it fits), agreements, disagreements, open questions, and per-agent deep-dives.

All state lives in the conversation. There is no file saved between turns — the "current roster" is whatever agent list was most recently shown in the chat.

> **Requires at least one research tool.** Web search is the default, but agents will also use available skills, MCP servers, and conversation context. If no research tools at all are available, tell the user and stop — don't attempt to run research from memory.

## Commands

| Command | Meaning |
|---|---|
| `/parallax "<topic>"` | Start a new run on the topic. |
| `ok` / `go` / `proceed` / `looks good` | Approve the roster — prompts for deployment mode. |
| `run sequentially` | Approve the roster and deploy agents one at a time immediately. |
| `run in parallel` | Approve the roster and deploy all agents simultaneously immediately. |

After the roster is shown (Step 3), also accept natural add/remove instructions like `add Economist`, `add Economist and Trade Unionist`, `remove 2`, `drop 2,5`, `remove the regulator one`. The goal is low-friction editing.

## Workflow

### Step 1 — Classify the invocation

Look at the user's message and decide which of these it is:

- **New topic.** A string that reads as a fresh topic (or `/parallax "<topic>"`). Go to Step 2.
- **Edit command.** After Step 3 has shown a roster: any message that adds or removes agents (e.g. "add X", "remove 2", "drop the regulator one"). Go to Step 3 directly, mutating the roster already in context.
- **Approval signal.** `ok`, `go`, `proceed`, `yes`, etc. Go to Step 3.5 (deployment mode selection).
- **Mode + approval.** `run sequentially` or `run in parallel`. Approve the roster and go directly to Step 4 in the chosen mode, skipping Step 3.5.

If an edit command arrives but there is no in-progress roster visible in the conversation, say so and ask the user to start a run with `/parallax "<topic>"`. Do not fabricate a roster.

### Step 2 — Recruit analyst agents

Assemble a roster of 4–10 analyst agents for the topic.

- **Adaptive count.** Narrow factual questions deserve ~4 agents. Broad debates ("was Brexit a success", "is nuclear power good or bad") can warrant the full 10. Note that 8+ agents at 5 searches each means ~40–50 web operations — prefer the lean default (3 searches) for large rosters, and avoid exceeding 8 agents unless the topic genuinely demands it.
- **Each agent is a real role or stakeholder** with an actual opinion and agenda — e.g. "Short-seller", "EU regulator", "Open-source maintainer", "Leave-voting small business owner". Not abstract framings like "Pessimistic view" or "Long-term perspective".
- **Diverse in kind.** Don't just vary optimism/pessimism. Mix insider vs. outsider, regulator vs. industry, winner vs. loser from the outcome, domestic vs. foreign, expert vs. affected layperson.

For each agent, hold onto: short name, one-line role description, and a one-line rationale for why this agent's viewpoint matters for this topic.
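As a mental model, the per-agent record is just three strings plus the adaptive-count heuristic. Here is a minimal Python sketch of that shape; it is illustrative only (the skill keeps this state in the conversation, not in code, and the names are assumptions, not an API):

```python
# Illustrative sketch only: the skill itself holds this state in the
# conversation, not in code. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Analyst:
    name: str       # short name, e.g. "Short-seller"
    role: str       # one-line role description
    rationale: str  # one line on why this viewpoint matters for this topic

def target_roster_size(breadth: str) -> int:
    """Rough rule of thumb from the bullets above: ~4 agents for narrow
    factual questions, up to 10 for broad debates, preferring <=8."""
    return {"narrow": 4, "medium": 6, "broad": 8}.get(breadth, 6)
```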
### Step 3 — Present the roster and wait

Render the current agent roster as a numbered markdown list:

```
I propose this analyst team for **<topic>**:

1. **<Agent name>** — <one-line role>. *Why:* <rationale>
2. **<Agent name>** — <one-line role>. *Why:* <rationale>
...

Reply with one of:
- `ok` to approve (you'll choose deployment mode next)
- `run sequentially` or `run in parallel` to approve and set mode in one step
- `add <name>` to add an agent
- `remove <number>` to drop one
- Or combine freely — plain language is fine
```

Then **stop and wait** for the user. When they respond:

- Apply the edits to the roster.
- If they added or removed anything, re-show the updated numbered roster and wait again.
- Loop until the user signals approval.
- If the roster ever becomes empty, refuse approval — tell them the roster is empty and ask them to add at least one agent.

Never skip this confirm loop, even if the roster looks obviously right. The user's edits are the whole point of the flow.

### Step 3.5 — Deployment mode selection

When the user approves with `ok` / `go` / `proceed` / `yes` (without specifying a mode), show this prompt and wait:

```
Agent roster approved — N analysts ready. How would you like to deploy them?

- **Sequential** — agents deployed one at a time with progress notes as each completes. You can say "more depth" mid-run to increase search depth. Works in all environments.
- **Parallel** — all agents deployed simultaneously as subagents. Much faster, but no mid-run control and all reports arrive at once. Requires Claude Code or Claude Cowork — not available in claude.ai web chat.

Reply `sequential` or `parallel`.
```

If the user already said `run sequentially` or `run in parallel` during the edit loop, skip this step entirely and proceed directly to Step 4 in the chosen mode.

### Step 4 — Deploy agents

Once the mode is known, proceed accordingly.

#### Sequential deployment

Deploy each agent **one at a time, in roster order**. Print a brief one-liner and begin immediately — do not wait for a reply:

> Deploying N agents sequentially (~3 searches each). Say "more depth" at any point to switch to ~5 per remaining agent.

Default to ~3 searches per agent. If the user says "more depth" mid-run, apply it to all remaining agents.

For each agent:

1. **Survey available resources.** Before searching the web, check what's already available:
   - **Skills:** Are any loaded skills relevant to this agent's domain? (e.g. a `financial-analysis` skill for a short-seller, a `legal-research` skill for a regulator). If so, invoke them as part of the research plan.
   - **MCP tools:** Are there MCP servers that provide relevant structured data? (e.g. a stock-data API, a news feed, a company database, a document store). If so, query them before or instead of web search.
   - **Conversation context:** Has the user shared relevant documents, files, data, or prior research in this conversation? If so, draw on that first.

   Use whatever combination of sources is available. Web search fills remaining gaps — it is not mandatory if other sources already cover the agent's research needs.

2. **Plan 2–3 research angles** that this agent would actually pursue given its role and agenda. Stay in role. A short-seller looks for accounting irregularities, insider sales, deteriorating fundamentals — not "Company X overview". A regulator looks for compliance history, enforcement precedents, cross-border jurisdictional questions. Adapt the angles to use the best available source for each.

3. **Execute the research plan** using the tools identified in step 1. For web search, use whatever tool is available (`web_search`, `search`, `brave_search`, etc.). Fetch the 1–2 most promising results per angle to read full content rather than relying on snippets (`web_fetch`, `fetch`, `read_url`, etc.).

4. **Track claim → source pairs** as you go. For non-URL sources (MCP data, uploaded files, skills), note the source type and identifier rather than a URL.

5. **No padding.** If a source returns nothing useful, say so rather than stretching weak results. If the agent genuinely has no relevant findings on this topic, record "no useful findings — agent N/A for this topic" and move on.

A short progress note between agents is fine ("Short-seller done — deploying the regulator"). Do not print interim findings inline; hold everything for the synthesized report. This keeps context clean and the final report coherent.
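For concreteness, the sequential loop and its search budget can be outlined as below. This is a hypothetical Python sketch: `survey_sources`, `plan_angles`, and `run_research` are stand-ins for the tool calls described above, not real APIs, and are stubbed so the sketch runs.

```python
# Hypothetical outline of the sequential loop. The three helpers are
# stand-ins for real tool calls, stubbed here so the sketch executes.
DEFAULT_BUDGET = 3  # lean default: ~3 searches per agent
DEEP_BUDGET = 5     # after the user says "more depth"

def survey_sources(agent):
    # Check loaded skills, MCP servers, and conversation context first;
    # web search only fills the remaining gaps.
    return ["web_search"]

def plan_angles(agent, topic):
    # 2-3 angles the agent would pursue in role, not a generic overview.
    return [f"{topic}: first angle for {agent}", f"{topic}: second angle for {agent}"]

def run_research(agent, angles, budget):
    # Execute the plan within budget; track claim -> source pairs as you go.
    return {"agent": agent, "angles": angles, "budget": budget}

def deploy_sequentially(agents, topic, more_depth=lambda: False):
    reports = []
    budget = DEFAULT_BUDGET
    for agent in agents:
        if more_depth():          # "more depth" applies to all remaining agents
            budget = DEEP_BUDGET
        survey_sources(agent)
        findings = run_research(agent, plan_angles(agent, topic), budget)
        reports.append(findings)  # held back for Step 5; no interim findings inline
    return reports
```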
#### Parallel deployment

Print a brief one-liner, then dispatch all agents simultaneously as subagents:

> Deploying N agents in parallel. Reports will arrive together — sit tight.

**Dispatching subagents:** For each agent, spawn a subagent with a self-contained prompt that includes:

- The topic
- The agent's name, role description, and rationale
- The research instructions below (search angles, budget, fetch rule, tracking, no-padding rule)
- An instruction to return a structured findings block (see format below)

Each subagent has no conversation context — brief it fully. Do not rely on shared state.

**Each subagent researches as its assigned agent:**

1. Survey available resources first: check for relevant skills, MCP tools, and any context provided in this prompt before defaulting to web search. Use whatever combination of sources is available.
2. Plan 2–3 research angles this agent would pursue given its role and agenda. Stay in role. Adapt angles to the best available source for each.
3. Execute the research plan. Use web search (`web_search`, `search`, etc.) for angles not covered by other tools. Fetch the 1–2 most promising results per angle for full content.
4. Track claim → source pairs throughout. Note source type for non-URL sources.
5. If a source returns nothing useful, record "no useful findings" rather than padding.

**Each subagent returns a structured findings block:**

```
## <Agent name>

**Key findings**
- <finding> [source](url)

**Sources consulted**
- <source> — relevance note
```

Collect all findings blocks from subagents, then proceed to Step 5 to synthesize. Do not print interim findings. "More depth" is not available in parallel mode — if the user asks mid-run, note that it applies to sequential mode only.
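Because each subagent starts with zero context, the briefing carries everything. A sketch of what the self-contained prompt might look like follows; the function is hypothetical, and the actual dispatch call depends on the host environment (Claude Code or Claude Cowork), so only prompt construction is shown.

```python
# Hypothetical sketch of a self-contained subagent briefing. Only the
# prompt construction is shown; dispatch is environment-specific.
def build_subagent_prompt(topic: str, agent: dict) -> str:
    return f"""You are {agent['name']} — {agent['role']}.
Why your viewpoint matters here: {agent['rationale']}

Topic to research, strictly from your role's perspective: {topic}

Rules:
- Survey available skills, MCP tools, and any context in this prompt before
  defaulting to web search.
- Plan 2-3 research angles your role would actually pursue; stay in role.
- Budget ~3 searches; fetch the 1-2 most promising results per angle.
- Track claim -> source pairs; note source type for non-URL sources.
- If a source yields nothing useful, record "no useful findings"; do not pad.

Return exactly this findings block:

## {agent['name']}

**Key findings**
- <finding> [source](url)

**Sources consulted**
- <source> — relevance note
"""
```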
### Step 5 — Synthesize the report

Render the final report as a single markdown document in exactly this section order:

```markdown
# <Topic> — multi-agent research

*Generated YYYY-MM-DD · N agents · parallax*

## Executive synthesis

<2–4 paragraphs. Convergent findings first, then key disagreements, then open questions. Inline [source](url) citations on every factual claim. No fabrication — if no source supports a claim, don't make it.>

## Comparison table

<one row per agent; include only when tabular comparison fits the topic>

## Areas of agreement

- <point> [source](url) [source](url)

## Key disagreements

- **<issue>** — agent A says X [source](url); agent B says Y [source](url).

## Open questions

- <question>

---

## Agent reports (deep-dives)

### <Agent name>

**Role:** <one-line role description>

**Key findings**
- <finding> [source](url)
- <finding> [source](url)

**Notable disagreements with other agents** *(omit this subsection entirely if nothing to say)*
- <point>

**Sources consulted**
- <source> — short note on relevance
- <source> — short note on relevance

### ...
```

Notes:

- Use today's date for the `*Generated …*` line.
- The headline agent count (`N agents`) counts only agents that produced real findings. Agents that returned "N/A" or failed mid-research still get a deep-dive entry with their status flagged, but are excluded from the headline number and from the comparison/agreement/disagreement sections.
- For a substantial report, consider rendering it as a markdown artifact or file so the user can save and share it easily. Fall back to inline markdown if that isn't available.
- After printing the report, proceed to Step 6 (export prompt). Do not propose follow-up research runs or suggest new topics — the user can start another run themselves.

### Step 6 — Export (optional)

After printing the report, show this prompt and wait:

```
Reply "word", "pdf", or "md" for a downloadable report in the stated format.
```

**If `md`:** Write the report directly to `parallax-report.md` and report the file path. Done.

**If `word` or `pdf`:**

1. Write the report to `parallax-report.md` using the file-write tool.
2. Detect what conversion tools are available, in this priority order:
   - **pandoc** — run `pandoc parallax-report.md -o parallax-report.docx` or `.pdf`. For PDF, try `--pdf-engine=weasyprint` then `--pdf-engine=xelatex` if the first fails.
   - **Google Drive MCP** — if `mcp__claude_ai_Google_Drive__create_file` is available and pandoc is not, upload the markdown content and tell the user to export from Google Docs.
   - **Python** — if neither of the above, try `python3 -c "import docx"` to check for python-docx, and generate a basic .docx programmatically.
3. If conversion succeeds, report the output file path. Clean up the temporary `.md` file.
4. If all methods fail, tell the user which tools are missing (e.g. "pandoc not found") and save as `parallax-report.md` instead.

If the user doesn't reply or says anything else, stop. No further output.
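The fallback chain in Step 6 can be pictured as a short script. This is a hypothetical sketch using only standard-library calls and the pandoc flags already named above; the Google Drive MCP and python-docx branches are abbreviated to availability checks.

```python
# Hypothetical sketch of the Step 6 fallback chain. Uses only the pandoc
# commands named above plus standard-library availability checks.
import shutil
import subprocess

def export_report(fmt: str) -> str:          # fmt: "docx" or "pdf"
    out = f"parallax-report.{fmt}"
    if shutil.which("pandoc"):
        engines = ["weasyprint", "xelatex"] if fmt == "pdf" else [None]
        for engine in engines:               # PDF: try weasyprint, then xelatex
            cmd = ["pandoc", "parallax-report.md", "-o", out]
            if engine:
                cmd.append(f"--pdf-engine={engine}")
            if subprocess.run(cmd).returncode == 0:
                return out
    # Next preference: Google Drive MCP upload (environment-specific, not shown).
    try:
        import docx                          # python-docx, last programmatic resort
        # ...generate a basic .docx here...
    except ImportError:
        pass
    return "parallax-report.md"              # all methods failed: keep the markdown
```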
## Hard rules

- **Never fabricate a source URL.** If you don't have a real URL, don't write a citation.
- **Never skip the confirm loop in Step 3.** The user-edit step is the point of the workflow.
- **Stay in role during Step 4.** Each agent's research is a steelman from that agent's perspective, not a neutral summary. Synthesis is Step 5's job, not Step 4's.
- **Never propose follow-up research runs after the report.** The export prompt (Step 6) is the only post-report output.
- **Honour the search budget.** Don't let research balloon to dozens of searches per agent — it wastes tokens and dilutes the per-agent voice.
- **Copyright discipline.** Paraphrase rather than quote. Any direct quote stays under 15 words, no more than one per source.

## Worked examples

Full example reports are in the [`examples/`](examples/) folder:

| Report | Topic | Agents |
|---|---|---|
| [palantir.md](examples/palantir.md) | Palantir (PLTR) stock outlook | 7 |
| [iran-us-truce.md](examples/iran-us-truce.md) | Iran–US truce likelihood before end of May 2026 | 7 |
| [ai-threat-to-saas.md](examples/ai-threat-to-saas.md) | Is the AI threat to SaaS significant in the short, medium, or long term? | 7 |

### How a session looks

User: `/parallax "Palantir stock outlook"`

Claude proposes a roster:

```
I propose this analyst team for **Palantir stock outlook**:

1. **Bull-case long investor** — Conviction holder in Palantir's government-contract moat and AIP commercial ramp. *Why:* The core thesis driving the stock's premium valuation; needs to be steelmanned on its own terms.
2. **Short-seller / bear analyst** — Focused on SBC dilution, valuation multiples at 80x trailing P/S, and mean-reversion risk. *Why:* The bear case is structural, not just sentiment — SBC and FCF arguments deserve their own voice.
3. **Defense & intelligence procurement officer** — Government buyer familiar with Maven Smart System, TITAN, and classified pipelines. *Why:* Most retail analysis ignores the actual product; this grounds claims in operational reality.
4. **Enterprise software competitor** — Strategist at Snowflake or Databricks assessing where Palantir wins and loses. *Why:* Competitive positioning is the swing factor for commercial-segment growth projections.
5. **Macro / geopolitical analyst** — Assessing how DOGE budget cuts, tariffs, and global instability affect government contract flow. *Why:* Government revenue is Palantir's anchor — macro shocks are an underanalysed risk.
6. **ESG-focused institutional investor** — Fund manager weighing ImmigrationOS controversy and ethics exposure against returns. *Why:* A growing cohort of institutional capital screens on ethics; Palantir's client list is a live controversy.
7. **Retail / momentum trader** — Drawn in by Karp's public persona and the AI narrative, dip-buying at current levels. *Why:* Retail flow is a measurable price driver; explains sentiment disconnects from fundamentals.

Reply with one of:
- `ok` to approve (you'll choose deployment mode next)
- `run sequentially` or `run in parallel` to approve and set mode in one step
- `add <name>` to add an agent
- `remove <number>` to drop one
```

User: `remove 7, add a former Palantir engineer`

Claude re-shows the updated roster with the retail trader replaced and waits.

User: `run in parallel`

Claude skips the mode prompt, dispatches one subagent per agent simultaneously, collects all findings blocks, then renders the synthesized report. See the full output in [examples/palantir.md](examples/palantir.md).