---
name: browser-debugger
description: Systematically tests UI functionality, validates design fidelity with AI visual analysis, monitors console output, tracks network requests, and provides debugging reports using Chrome DevTools MCP. Use after implementing UI features, for design validation, when investigating console errors, for regression testing, or when user mentions testing, browser bugs, console errors, or UI verification.
allowed-tools: Task, Bash
---

# Browser Debugger

This Skill provides comprehensive browser-based UI testing, visual analysis, and debugging capabilities using the Chrome DevTools MCP server and optional external vision models via Claudish.

## When to Use This Skill

Claude and agents (developer, reviewer, tester, ui-developer) should invoke this Skill when:

- **Validating Own Work**: After implementing UI features, agents should verify their work in a real browser
- **Design Fidelity Checks**: Comparing implementation screenshots against design references
- **Visual Regression Testing**: Detecting layout shifts, styling issues, or visual bugs
- **Console Error Investigation**: The user reports console errors or warnings
- **Form/Interaction Testing**: Verifying that user interactions work correctly
- **Pre-Commit Verification**: Before committing or deploying code
- **Bug Reproduction**: The user describes UI bugs that need investigation

## Prerequisites

### Required: Chrome DevTools MCP

This skill requires the Chrome DevTools MCP server.
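Both prerequisite checks described in this section can be run in a single pass. The sketch below is a hypothetical `preflight_report` helper, not part of the skill itself; it assumes the Claude Code CLI (`claude`) is on PATH for the MCP check, and quietly reports "missing" rather than failing when it is not:

```shell
#!/usr/bin/env bash
# Hypothetical preflight helper -- an illustration, not part of the skill.

preflight_report() {
  # $1: non-empty if the chrome-devtools MCP server is registered
  # $2: non-empty if OPENROUTER_API_KEY is set
  local mcp="missing" vision="missing"
  [ -n "$1" ] && mcp="ok"
  [ -n "$2" ] && vision="ok"
  echo "chrome-devtools-mcp: ${mcp}"
  echo "external-vision: ${vision}"
}

# Gather the real signals. `claude mcp list` assumes the Claude Code CLI is
# installed; stderr is suppressed so a missing binary just yields "missing".
HAS_MCP=$(claude mcp list 2>/dev/null | grep -q chrome-devtools && echo 1)
HAS_KEY=${OPENROUTER_API_KEY:+1}

preflight_report "$HAS_MCP" "$HAS_KEY"
```

A caller (or a hypothetical CI gate) could then `grep -q missing` on this report and abort before attempting any browser testing.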
Check availability and install if needed:

```bash
# Check if available
mcp__chrome-devtools__list_pages 2>/dev/null && echo "Available" || echo "Not available"

# Install via claudeup (recommended)
npm install -g claudeup@latest
claudeup mcp add chrome-devtools
```

### Optional: External Vision Models (via OpenRouter)

For advanced visual analysis, use external vision-language models via Claudish:

```bash
# Check OpenRouter API key
[[ -n "${OPENROUTER_API_KEY}" ]] && echo "OpenRouter configured" || echo "Not configured"

# Install claudish
npm install -g claudish
```

---

## Visual Analysis Models (Recommended)

For the best visual analysis of UI screenshots, use these models via Claudish:

### Tier 1: Best Quality (Recommended for Design Validation)

| Model | Strengths | Cost | Best For |
|-------|-----------|------|----------|
| **qwen/qwen3-vl-32b-instruct** | Best OCR, spatial reasoning, GUI automation, 32+ languages | ~$0.06/1M input | Design fidelity, OCR, element detection |
| **google/gemini-2.5-flash** | Fast, excellent price/performance, 1M context | ~$0.05/1M input | Real-time validation, large pages |
| **openai/gpt-4o** | Most fluid multimodal, strong all-around | ~$0.15/1M input | Complex visual reasoning |

### Tier 2: Fast & Affordable

| Model | Strengths | Cost | Best For |
|-------|-----------|------|----------|
| **qwen/qwen3-vl-30b-a3b-instruct** | Good balance, MoE architecture | ~$0.04/1M input | Quick checks, multiple iterations |
| **google/gemini-2.5-flash-lite** | Ultra-fast, very cheap | ~$0.01/1M input | High-volume testing |

### Tier 3: Free Options

| Model | Notes |
|-------|-------|
| **openrouter/polaris-alpha** | FREE, good for testing workflows |

### Model Selection Guide

```
Design Fidelity Validation → qwen/qwen3-vl-32b-instruct (best OCR & spatial)
Quick Smoke Tests          → google/gemini-2.5-flash (fast & cheap)
Complex Layout Analysis    → openai/gpt-4o (best reasoning)
High Volume Testing        → google/gemini-2.5-flash-lite (ultra-fast)
Budget Conscious           → openrouter/polaris-alpha (free)
```

---

## Visual Analysis Model Selection (Interactive)

**Before the first screenshot analysis in a session, ask the user which model to use.**

### Step 1: Check for Saved Preference

First, check whether the user has a saved model preference:

```bash
# Check for a saved preference in project settings
SAVED_MODEL=$(cat .claude/settings.json 2>/dev/null | jq -r '.pluginSettings.frontend.visualAnalysisModel // empty')

# Or check a session-specific preference
if [[ -f "ai-docs/sessions/${SESSION_ID}/session-meta.json" ]]; then
  SESSION_MODEL=$(jq -r '.visualAnalysisModel // empty' "ai-docs/sessions/${SESSION_ID}/session-meta.json")
fi
```

### Step 2: If No Saved Preference, Ask the User

Use **AskUserQuestion** with these options:

```markdown
## Visual Analysis Model Selection

For screenshot analysis and design validation, which AI vision model would you like to use?

**Your choice will be remembered for this session.**
```

**AskUserQuestion options:**

| Option | Label | Description |
|--------|-------|-------------|
| 1 | `qwen/qwen3-vl-32b-instruct` (Recommended) | Best for design fidelity - excellent OCR, spatial reasoning, detailed analysis. ~$0.06/1M tokens |
| 2 | `google/gemini-2.5-flash` | Fast & affordable - great balance of speed and quality. ~$0.05/1M tokens |
| 3 | `openai/gpt-4o` | Most capable - best for complex visual reasoning. ~$0.15/1M tokens |
| 4 | `openrouter/polaris-alpha` (Free) | No cost - good for testing, basic analysis |
| 5 | Skip visual analysis | Use embedded Claude only (no external models) |

**Recommended based on task type:**

- Design validation → Option 1 (Qwen VL)
- Quick iterations → Option 2 (Gemini Flash)
- Complex layouts → Option 3 (GPT-4o)
- Budget-conscious → Option 4 (Free)

### Step 3: Save the User's Choice

After the user selects, save their preference:

**Option A: Save to Session (temporary)**

```bash
# Update session metadata
jq --arg model "$SELECTED_MODEL" '.visualAnalysisModel = $model' \
  "ai-docs/sessions/${SESSION_ID}/session-meta.json" > tmp.json && \
mv tmp.json "ai-docs/sessions/${SESSION_ID}/session-meta.json"
```

**Option B: Save to Project Settings (persistent)**

```bash
# Update project settings for future sessions
jq --arg model "$SELECTED_MODEL" \
  '.pluginSettings.frontend.visualAnalysisModel = $model' \
  .claude/settings.json > tmp.json && mv tmp.json .claude/settings.json
```

### Step 4: Use the Selected Model

Store the selected model in a variable and use it for all subsequent visual analysis:

```bash
# VISUAL_MODEL is now set to the user's choice
# Use it in all claudish calls:
npx claudish --model "$VISUAL_MODEL" --stdin --quiet <