---
name: review
version: 1.0.0
description: |
  Pre-landing PR review. Analyzes the diff against the base branch for SQL safety,
  LLM trust boundary violations, conditional side effects, and other structural issues.
  Use when asked to "review this PR", "code review", "pre-landing review", or
  "check my diff". Proactively suggest when the user is about to merge or land code changes.
allowed-tools:
  - Bash
  - Read
  - Edit
  - Write
  - Grep
  - Glob
  - AskUserQuestion
---

## Preamble (run first)

```bash
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
echo "PROACTIVE: $_PROACTIVE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
mkdir -p ~/.gstack/analytics
echo '{"skill":"review","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
for _PF in ~/.gstack/analytics/.pending-*; do
  [ -f "$_PF" ] && ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true
  break
done
```

If `PROACTIVE` is `"false"`, do not proactively suggest gstack skills — only invoke them when the user explicitly asks. The user opted out of proactive suggestions.

If output shows an `UPGRADE_AVAILABLE` line: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined).

If output shows a `JUST_UPGRADED` line: tell the user "Running gstack v{to} (just updated!)" using the version from that line, and continue.

If `LAKE_INTRO` is `no`: before continuing, introduce the Completeness Principle. Tell the user:

"gstack follows the **Boil the Lake** principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"

Then offer to open the essay in their default browser:

```bash
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```

Only run `open` if the user says yes. Always run `touch` to mark as seen. This only happens once.

If `TEL_PROMPTED` is `no` AND `LAKE_INTRO` is `yes`: after the lake intro is handled, ask the user about telemetry. Use AskUserQuestion:

> Help gstack get better! Community mode shares usage data (which skills you use, how long
> they take, crash info) with a stable device ID so we can track trends and fix bugs faster.
> No code, file paths, or repo names are ever sent.
> Change anytime with `gstack-config set telemetry off`.
Options:

- A) Help gstack get better! (recommended)
- B) No thanks

If A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry community`

If B: ask a follow-up AskUserQuestion:

> How about anonymous mode? We just learn that *someone* used gstack — no unique ID,
> no way to connect sessions. Just a counter that helps us know if anyone's out there.

Options:

- A) Sure, anonymous is fine
- B) No thanks, fully off

If B→A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous`
If B→B: run `~/.claude/skills/gstack/bin/gstack-config set telemetry off`

Always run:

```bash
touch ~/.gstack/.telemetry-prompted
```

This only happens once. If `TEL_PROMPTED` is `yes`, skip this entirely.

## AskUserQuestion Format

**ALWAYS follow this structure for every AskUserQuestion call:**

1. **Re-ground:** State the project, the current branch (use the `_BRANCH` value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
2. **Simplify:** Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
3. **Recommend:** `RECOMMENDATION: Choose [X] because [one-line reason]` — always prefer the complete option over shortcuts (see Completeness Principle). Include `Completeness: X/10` for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
4. **Options:** Lettered options: `A) ... B) ... C) ...` — when an option involves effort, show both scales: `(human: ~X / CC: ~Y)`

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.

Per-skill instructions may add additional formatting rules on top of this baseline.
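A hypothetical example of a call that follows this baseline; the project, branch, and effort numbers are all invented for illustration:

```
Re-ground: We're in acme-web on branch feature/csv-export (the _BRANCH value
from the preamble), running the pre-landing review before /ship.

Simplify: The export code writes the file to disk first and records it in the
database second. If the app crashes between those two steps, there's a file on
disk that the app doesn't know about, like renting a locker and losing the
ticket.

RECOMMENDATION: Choose A because it closes the crash window completely.

A) Record first, write second, clean up on failure. Completeness: 9/10
   (human: ~4 hours / CC: ~15 min)
B) Ship as-is, add a cleanup job later. Completeness: 4/10
   (human: ~1 hour / CC: ~5 min)
```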
## Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:

- If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — **always recommend A**. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
- **Lake vs. ocean:** A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
- **When estimating effort**, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:

| Task type | Human team | CC+gstack | Compression |
|-----------|-----------|-----------|-------------|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |

- This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness.

Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.

**Anti-patterns — DON'T do this:**

- BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
- BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
- BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
- BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")

## Contributor Mode

If `_CONTRIB` is `true`: you are in **contributor mode**. You're a gstack user who also helps make it better.

**At the end of each major workflow step** (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!

**Calibration — this is the bar:** For example, `$B js "await fetch(...)"` used to fail with `SyntaxError: await is only valid in async functions` because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.

**NOT worth filing:** user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.

**To file:** write `~/.gstack/contributor-logs/{slug}.md` with **all sections below** (do not truncate — include every section through the Date/Version footer):

````
# {Title}

Hey gstack team — ran into this while using /{skill-name}:

**What I was trying to do:** {what the user/agent was attempting}

**What happened instead:** {what actually happened}

**My rating:** {0-10} — {one sentence on why it wasn't a 10}

## Steps to reproduce

1. {step}

## Raw output

```
{paste the actual error or unexpected output here}
```

## What would make this a 10

{one sentence: what gstack should have done differently}

**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}
````

Slug: lowercase, hyphens, max 60 chars (e.g. `browse-js-no-await`). Skip if the file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell the user: "Filed gstack field report: {title}"

## Completion Status Protocol

When completing a skill workflow, report status using one of:

- **DONE** — All steps completed successfully. Evidence provided for each claim.
- **DONE_WITH_CONCERNS** — Completed, but with issues the user should know about. List each concern.
- **BLOCKED** — Cannot proceed. State what is blocking and what was tried.
- **NEEDS_CONTEXT** — Missing information required to continue. State exactly what you need.

### Escalation

It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result." Bad work is worse than no work. You will not be penalized for escalating.

- If you have attempted a task 3 times without success, STOP and escalate.
- If you are uncertain about a security-sensitive change, STOP and escalate.
- If the scope of work exceeds what you can verify, STOP and escalate.

Escalation format:

```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
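A filled-in example of a good escalation (scenario invented for illustration):

```
STATUS: NEEDS_CONTEXT
REASON: The diff references a migration that isn't in the repo, so the SQL
safety items in the checklist can't be verified.
ATTEMPTED: Searched the repo and git history for the migration; checked
untracked files with git status.
RECOMMENDATION: Commit or share the migration file, then re-run /review.
```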
## Telemetry (run last)

After the skill workflow completes (success, error, or abort), log the telemetry event. Determine the skill name from the `name:` field in this file's YAML frontmatter. Determine the outcome from the workflow result (success if completed normally, error if it failed, abort if the user interrupted).

**PLAN MODE EXCEPTION — ALWAYS RUN:** This command writes telemetry to `~/.gstack/analytics/` (user config directory, not project files). The skill preamble already writes to the same directory — this is the same pattern. Skipping this command loses session duration and outcome data.

Run this bash:

```bash
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
  --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &
```

Replace `SKILL_NAME` with the actual skill name from frontmatter, `OUTCOME` with success/error/abort, and `USED_BROWSE` with true/false based on whether `$B` was used. If you cannot determine the outcome, use "unknown". This runs in the background and never blocks the user.

## Step 0: Detect base branch

Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.

1. Check if a PR already exists for this branch: `gh pr view --json baseRefName -q .baseRefName`
   If this succeeds, use the printed branch name as the base branch.
2. If no PR exists (the command fails), detect the repo's default branch: `gh repo view --json defaultBranchRef -q .defaultBranchRef.name`
3. If both commands fail, fall back to `main`.

Print the detected base branch name. In every subsequent `git diff`, `git log`, `git fetch`, `git merge`, and `gh pr create` command, substitute the detected branch name wherever the instructions say `<base>` or "the base branch."

---

# Pre-Landing PR Review

You are running the `/review` workflow. Analyze the current branch's diff against the base branch for structural issues that tests don't catch.

---

## Step 1: Check branch

1. Run `git branch --show-current` to get the current branch.
2. If on the base branch, output: **"Nothing to review — you're on the base branch or have no changes against it."** and stop.
3. Run `git fetch origin --quiet && git diff origin/<base> --stat` to check if there's a diff. If no diff, output the same message and stop.
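A minimal sketch of Steps 0 and 1 combined (illustrative only; the numbered steps above are the source of truth), assuming `gh` and `git` are on PATH:

```bash
# Step 0 fallback chain: PR base -> repo default branch -> "main"
BASE=$(gh pr view --json baseRefName -q .baseRefName 2>/dev/null || true)
[ -z "$BASE" ] && BASE=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null || true)
[ -z "$BASE" ] && BASE="main"
echo "Base branch: $BASE"

# Step 1 bail-out: stop if on the base branch or the diff is empty
CUR=$(git branch --show-current)
git fetch origin --quiet
if [ "$CUR" = "$BASE" ] || [ -z "$(git diff origin/"$BASE" --stat)" ]; then
  echo "Nothing to review — you're on the base branch or have no changes against it."
fi
```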
---

## Step 1.5: Scope Drift Detection

Before reviewing code quality, check: **did they build what was requested — nothing more, nothing less?**

1. Read `TODOS.md` (if it exists). Read the PR description (`gh pr view --json body --jq .body 2>/dev/null || true`). Read commit messages (`git log origin/<base>..HEAD --oneline`). **If no PR exists:** rely on commit messages and TODOS.md for stated intent — this is the common case, since /review runs before /ship creates the PR.
2. Identify the **stated intent** — what was this branch supposed to accomplish?
3. Run `git diff origin/<base> --stat` and compare the files changed against the stated intent.
4. Evaluate with skepticism:

   **SCOPE CREEP detection:**
   - Files changed that are unrelated to the stated intent
   - New features or refactors not mentioned in the plan
   - "While I was in there..." changes that expand blast radius

   **MISSING REQUIREMENTS detection:**
   - Requirements from TODOS.md/PR description not addressed in the diff
   - Test coverage gaps for stated requirements
   - Partial implementations (started but not finished)

5. Output (before the main review begins):

   ```
   Scope Check: [CLEAN / DRIFT DETECTED / REQUIREMENTS MISSING]
   Intent: <1-line summary of what was requested>
   Delivered: <1-line summary of what the diff actually does>
   [If drift: list each out-of-scope change]
   [If missing: list each unaddressed requirement]
   ```

6. This is **INFORMATIONAL** — it does not block the review. Proceed to Step 2.

---

## Step 2: Read the checklist

Read `.claude/skills/review/checklist.md`. **If the file cannot be read, STOP and report the error.** Do not proceed without the checklist.

---

## Step 2.5: Check for Greptile review comments

Read `.claude/skills/review/greptile-triage.md` and follow the fetch, filter, classify, and **escalation detection** steps.

**If no PR exists, `gh` fails, the API returns an error, or there are zero Greptile comments:** Skip this step silently. Greptile integration is additive — the review works without it.

**If Greptile comments are found:** Store the classifications (VALID & ACTIONABLE, VALID BUT ALREADY FIXED, FALSE POSITIVE, SUPPRESSED) — you will need them in Step 5.

---

## Step 3: Get the diff

Fetch the latest base branch to avoid false positives from stale local state:

```bash
git fetch origin --quiet
```

Run `git diff origin/<base>` to get the full diff. This includes both committed and uncommitted changes against the latest base branch.

---

## Step 4: Two-pass review

Apply the checklist against the diff in two passes:

1. **Pass 1 (CRITICAL):** SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Enum & Value Completeness
2. **Pass 2 (INFORMATIONAL):** Conditional Side Effects, Magic Numbers & String Coupling, Dead Code & Consistency, LLM Prompt Issues, Test Gaps, View/Frontend

**Enum & Value Completeness requires reading code OUTSIDE the diff.** When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.

Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.

---

## Step 4.5: Design Review (conditional, diff-scoped)

Check if the diff touches frontend files using `gstack-diff-scope`:

```bash
source <(~/.claude/skills/gstack/bin/gstack-diff-scope 2>/dev/null)
```

**If `SCOPE_FRONTEND=false`:** Skip design review silently. No output.

**If `SCOPE_FRONTEND=true`:**

1. **Check for DESIGN.md.** If `DESIGN.md` or `design-system.md` exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
2. **Read `.claude/skills/review/design-checklist.md`.** If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
3. **Read each changed frontend file** (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
4. **Apply the design checklist** against the changed files. For each item:
   - **[HIGH] mechanical CSS fix** (`outline: none`, `!important`, `font-size < 16px`): classify as AUTO-FIX
   - **[HIGH/MEDIUM] design judgment needed**: classify as ASK
   - **[LOW] intent-based detection**: present as "Possible — verify visually or run /design-review"
5. **Include findings** in the review output under a "Design Review" header, following the output format in the checklist.
   Design findings merge with code review findings into the same Fix-First flow.
6. **Log the result** for the Review Readiness Dashboard:

   ```bash
   ~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M,"commit":"COMMIT"}'
   ```

   Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings, otherwise "issues_found", N = total findings, M = auto-fixed count, COMMIT = output of `git rev-parse --short HEAD`.

Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else.

---

## Step 5: Fix-First Review

**Every finding gets action — not just critical ones.**

Output a summary header: `Pre-Landing Review: N issues (X critical, Y informational)`

### Step 5a: Classify each finding

For each finding, classify as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational findings lean toward AUTO-FIX.

### Step 5b: Auto-fix all AUTO-FIX items

Apply each fix directly. For each one, output a one-line summary: `[AUTO-FIXED] [file:line] Problem → what you did`

### Step 5c: Batch-ask about ASK items

If there are ASK items remaining, present them in ONE AskUserQuestion:

- List each item with a number, the severity label, the problem, and a recommended fix
- For each item, provide options: A) Fix as recommended, B) Skip
- Include an overall RECOMMENDATION

Example format:

```
I auto-fixed 5 issues. 2 need your input:

1. [CRITICAL] app/models/post.rb:42 — Race condition in status transition
   Fix: Add `WHERE status = 'draft'` to the UPDATE
   → A) Fix  B) Skip

2. [INFORMATIONAL] app/services/generator.rb:88 — LLM output not type-checked before DB write
   Fix: Add JSON schema validation
   → A) Fix  B) Skip

RECOMMENDATION: Fix both — #1 is a real race condition, #2 prevents silent data corruption.
```

If there are 3 or fewer ASK items, you may use individual AskUserQuestion calls instead of batching.

### Step 5d: Apply user-approved fixes

Apply fixes for items where the user chose "Fix." Output what was fixed. If no ASK items exist (everything was AUTO-FIX), skip the question entirely.

### Verification of claims

Before producing the final review output:

- If you claim "this pattern is safe" → cite the specific line proving safety
- If you claim "this is handled elsewhere" → read and cite the handling code
- If you claim "tests cover this" → name the test file and method
- Never say "likely handled" or "probably tested" — verify or flag as unknown

**Rationalization prevention:** "This looks fine" is not a finding. Either cite evidence it IS fine, or flag it as unverified.

### Greptile comment resolution

After outputting your own findings, if Greptile comments were classified in Step 2.5:

**Include a Greptile summary in your output header:** `+ N Greptile comments (X valid, Y fixed, Z FP)`

Before replying to any comment, run the **Escalation Detection** algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.

1. **VALID & ACTIONABLE comments:** These are included in your findings and follow the Fix-First flow — auto-fixed if mechanical, batched into ASK if not (options: A: Fix it now, B: Acknowledge, C: False positive).
   If the user chooses A (fix), reply using the **Fix reply template** from greptile-triage.md (include inline diff + explanation).
   If the user chooses C (false positive), reply using the **False Positive reply template** (include evidence + suggested re-rank), and save to both the per-project and global greptile-history.
2. **FALSE POSITIVE comments:** Present each one via AskUserQuestion:
   - Show the Greptile comment: file:line (or [top-level]) + body summary + permalink URL
   - Explain concisely why it's a false positive
   - Options:
     - A) Reply to Greptile explaining why this is incorrect (recommended if clearly wrong)
     - B) Fix it anyway (if low-effort and harmless)
     - C) Ignore — don't reply, don't fix

   If the user chooses A, reply using the **False Positive reply template** from greptile-triage.md (include evidence + suggested re-rank), and save to both the per-project and global greptile-history.
3. **VALID BUT ALREADY FIXED comments:** Reply using the **Already Fixed reply template** from greptile-triage.md — no AskUserQuestion needed:
   - Include what was done and the fixing commit SHA
   - Save to both the per-project and global greptile-history
4. **SUPPRESSED comments:** Skip silently — these are known false positives from previous triage.

---

## Step 5.5: TODOS cross-reference

Read `TODOS.md` in the repository root (if it exists). Cross-reference the PR against open TODOs:

- **Does this PR close any open TODOs?** If yes, note which items in your output: "This PR addresses TODO: <item>"
- **Does this PR create work that should become a TODO?** If yes, flag it as an informational finding.
- **Are there related TODOs that provide context for this review?** If yes, reference them when discussing related findings.

If TODOS.md doesn't exist, skip this step silently.

---

## Step 5.6: Documentation staleness check

Cross-reference the diff against documentation files. For each `.md` file in the repo root (README.md, ARCHITECTURE.md, CONTRIBUTING.md, CLAUDE.md, etc.):

1. Check if code changes in the diff affect features, components, or workflows described in that doc file.
2. If the doc file was NOT updated in this branch but the code it describes WAS changed, flag it as an INFORMATIONAL finding: "Documentation may be stale: [file] describes [feature/component] but code changed in this branch. Consider running `/document-release`."

This is informational only — never critical. The fix action is `/document-release`. If no documentation files exist, skip this step silently.

---

## Step 5.7: Codex second opinion (optional)

After completing the review, check if the Codex CLI is available:

```bash
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
```

If Codex is available, use AskUserQuestion:

```
Review complete. Want an independent second opinion from Codex (OpenAI)?

A) Run Codex code review — independent diff review with pass/fail gate
B) Run Codex adversarial challenge — try to find ways this code will fail in production
C) Both — review first, then adversarial challenge
D) Skip — no Codex review needed
```

If the user chooses A, B, or C:

**For code review (A or C):** Run `codex review --base <base>` with a 5-minute timeout. Present the full output verbatim under a `CODEX SAYS (code review):` header. Check the output for `[P1]` markers — if found, note `GATE: FAIL`, otherwise `GATE: PASS`. After presenting, compare Codex's findings with your own review findings from Steps 4-5 and output a CROSS-MODEL ANALYSIS showing what both found, what only Codex found, and what only Claude found.

**For adversarial challenge (B or C):** Run:

```bash
codex exec "Review the changes on this branch against the base branch.
Run git diff origin/<base> to see the diff.
Your job is to find ways this code will fail in production.
Think like an attacker and a chaos engineer.
Find edge cases, race conditions, security holes, failure modes.
Be adversarial." -s read-only
```

Present the full output verbatim under a `CODEX SAYS (adversarial challenge):` header.
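For the code-review path (A or C), a minimal sketch of the 5-minute timeout and the `[P1]` gate check. This assumes GNU `timeout` is available and `$BASE` holds the branch detected in Step 0; the verdict feeds the review-log entry below:

```bash
# Cap the Codex review at 5 minutes and keep a copy of the output
timeout 300 codex review --base "$BASE" 2>&1 | tee /tmp/codex-review.txt

# Gate: any [P1] marker in the output fails the review
if grep -q '\[P1\]' /tmp/codex-review.txt; then
  echo "GATE: FAIL"
else
  echo "GATE: PASS"
fi
```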
**Only if a code review ran (user chose A or C):** Persist the Codex review result to the review log:

```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"codex-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","gate":"GATE"}'
```

Substitute: STATUS ("clean" if PASS, "issues_found" if FAIL), GATE ("pass" or "fail").

**Do NOT persist a codex-review entry when only the adversarial challenge (B) ran** — there is no gate verdict to record, and a false entry would make the Review Readiness Dashboard believe a code review happened when it didn't.

If Codex is not available, skip this step silently.

---

## Important Rules

- **Read the FULL diff before commenting.** Do not flag issues already addressed in the diff.
- **Fix-first, not read-only.** AUTO-FIX items are applied directly. ASK items are only applied after user approval. Never commit, push, or create PRs — that's /ship's job.
- **Be terse.** One line problem, one line fix. No preamble.
- **Only flag real problems.** Skip anything that's fine.
- **Use Greptile reply templates from greptile-triage.md.** Every reply includes evidence. Never post vague replies.