---
name: browser-debugging
description: Systematically tests UI functionality, validates design fidelity with AI visual analysis, monitors console output, tracks network requests, and provides debugging reports using Chrome Extension MCP tools. Use after implementing UI features, for design validation, when investigating console errors, for regression testing, or when user mentions testing, browser bugs, console errors, or UI verification.
---

# Browser Debugging

This Skill provides comprehensive browser-based UI testing, visual analysis, and debugging capabilities using Claude-in-Chrome Extension MCP tools and optional external vision models via Claudish.

## When to Use This Skill

Claude and agents (developer, reviewer, tester, ui-developer) should invoke this Skill when:

- **Validating Own Work**: After implementing UI features, agents should verify their work in a real browser
- **Design Fidelity Checks**: Comparing implementation screenshots against design references
- **Visual Regression Testing**: Detecting layout shifts, styling issues, or visual bugs
- **Console Error Investigation**: User reports console errors or warnings
- **Form/Interaction Testing**: Verifying user interactions work correctly
- **Pre-Commit Verification**: Before committing or deploying code
- **Bug Reproduction**: User describes UI bugs that need investigation

## Prerequisites

### Required: Claude-in-Chrome Extension

This Skill requires the Claude-in-Chrome Extension MCP. The extension provides browser automation tools directly through Claude.

**Check if available**: The tools are available when the extension is installed and active. Look for `mcp__claude-in-chrome__*` tools in your available MCP tools.
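Before navigating anywhere, it can also help to confirm the app under test is actually serving. A minimal pre-flight sketch, assuming the Vite default port used elsewhere in this Skill; the `check_server` helper and `APP_URL` variable are illustrative, not part of the MCP tooling:

```shell
#!/usr/bin/env bash
# Pre-flight: is the dev server up before we point the browser at it?
# check_server and APP_URL are illustrative assumptions; adjust per project.
check_server() {
  local url="${1:-http://localhost:5173}"
  if curl -sf -o /dev/null --max-time 5 "$url"; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

check_server "${APP_URL:-http://localhost:5173}"
```

If the server is unreachable, start it first; otherwise every screenshot and console check below will just capture a browser error page.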
### Optional: External Vision Models (via OpenRouter)

For advanced visual analysis, use external vision-language models via Claudish:

```bash
# Check OpenRouter API key
[[ -n "${OPENROUTER_API_KEY}" ]] && echo "OpenRouter configured" || echo "Not configured"

# Install claudish
npm install -g claudish
```

---

## Visual Analysis Models (Recommended)

For best visual analysis of UI screenshots, use these models via Claudish:

### Tier 1: Best Quality (Recommended for Design Validation)

| Model | Strengths | Cost | Best For |
|-------|-----------|------|----------|
| **qwen/qwen3-vl-32b-instruct** | Best OCR, spatial reasoning, GUI automation, 32+ languages | ~$0.06/1M input | Design fidelity, OCR, element detection |
| **google/gemini-2.5-flash** | Fast, excellent price/performance, 1M context | ~$0.05/1M input | Real-time validation, large pages |
| **openai/gpt-4o** | Most fluid multimodal, strong all-around | ~$0.15/1M input | Complex visual reasoning |

### Tier 2: Fast & Affordable

| Model | Strengths | Cost | Best For |
|-------|-----------|------|----------|
| **qwen/qwen3-vl-30b-a3b-instruct** | Good balance, MoE architecture | ~$0.04/1M input | Quick checks, multiple iterations |
| **google/gemini-2.5-flash-lite** | Ultrafast, very cheap | ~$0.01/1M input | High-volume testing |

### Tier 3: Free Options

| Model | Notes |
|-------|-------|
| **openrouter/polaris-alpha** | FREE, good for testing workflows |

### Model Selection Guide

```
Design Fidelity Validation → qwen/qwen3-vl-32b-instruct (best OCR & spatial)
Quick Smoke Tests          → google/gemini-2.5-flash (fast & cheap)
Complex Layout Analysis    → openai/gpt-4o (best reasoning)
High Volume Testing        → google/gemini-2.5-flash-lite (ultrafast)
Budget Conscious           → openrouter/polaris-alpha (free)
```

---

## Recipe 1: Agent Self-Validation (After Implementation)

**Use Case**: Developer/UI-Developer agent validates their own work after implementing a feature.
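The `claudish` invocations in this Skill read their analysis prompt from stdin. A hedged sketch of wiring that up; the `build_prompt` helper and the prompt wording are illustrative assumptions (how the screenshot itself is attached is outside this sketch):

```shell
#!/usr/bin/env bash
# Sketch: feed a visual-analysis prompt to claudish via --stdin.
# build_prompt and its prompt text are illustrative, not part of the Skill.
build_prompt() {
  local route="$1"
  cat <<EOF
Analyze the captured screenshot of ${route}.
Check for: layout issues, missing elements, styling problems.
Rate each finding as critical, major, or minor.
EOF
}

# Only call the external model when OpenRouter is configured.
if [[ -n "${OPENROUTER_API_KEY}" ]]; then
  build_prompt "/your-route" | npx claudish --model qwen/qwen3-vl-32b-instruct --stdin --quiet
else
  echo "OPENROUTER_API_KEY not set; skipping external vision analysis"
fi
```

Guarding on `OPENROUTER_API_KEY` keeps the recipe degradable: when no key is present, agents fall back to embedded-Claude screenshot analysis, which is always available.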
### Pattern: Implement → Screenshot → Analyze → Report

```markdown
## After Implementing UI Feature

1. **Save file changes** (Edit tool)
2. **Capture implementation screenshot**:
   \`\`\`
   mcp__claude-in-chrome__navigate(url: "http://localhost:5173/your-route")
   # Wait for page load
   mcp__claude-in-chrome__computer(action: "screenshot")
   \`\`\`
3. **Analyze with embedded Claude** (always available):
   - Describe what you see in the screenshot
   - Check for obvious layout issues
   - Verify expected elements are present
4. **Optional: Enhanced analysis with vision model**:
   \`\`\`bash
   # Use Qwen VL for detailed visual analysis
   npx claudish --model qwen/qwen3-vl-32b-instruct --stdin --quiet
   \`\`\`
5. **Check console for errors** (mcp__claude-in-chrome__read_console_messages)
6. **Check network requests for failures** (HTTP status >= 400)
7. **Report results to orchestrator**
```

### Quick Self-Check (5-Point Validation)

Agents should perform this quick check after any UI implementation:

```markdown
## Quick Self-Validation Checklist

□ 1. Screenshot shows expected UI elements
□ 2. No console errors (check: mcp__claude-in-chrome__read_console_messages)
□ 3. No network failures (check: mcp__claude-in-chrome__read_network_requests)
□ 4. Interactive elements respond correctly
□ 5. Visual styling matches expectations
```

---

## Recipe 2: Design Fidelity Validation

**Use Case**: Compare implementation against Figma design or design reference.

### Pattern: Design Reference → Implementation → Visual Diff

```markdown
## Design Fidelity Check

### Step 1: Capture Implementation

\`\`\`
mcp__claude-in-chrome__navigate(url: "http://localhost:5173/component")
mcp__claude-in-chrome__resize_window(width: 1440, height: 900)
mcp__claude-in-chrome__computer(action: "screenshot")
\`\`\`

### Step 2: Visual Analysis with Vision Model

\`\`\`bash
npx claudish --model qwen/qwen3-vl-32b-instruct --stdin --quiet
\`\`\`
```