---
name: qcsd-ideation-swarm
description: "QCSD Ideation phase swarm for Quality Criteria sessions using HTSM v6.3, Risk Storming, and Testability analysis before development begins. Uses 5-tier browser cascade: Vibium → agent-browser → Playwright+Stealth → WebFetch → WebSearch-fallback."
category: qcsd-phases
priority: critical
version: 7.5.1
tokenEstimate: 3500

# DDD Domain Mapping (from QCSD-AGENTIC-QE-MAPPING-FRAMEWORK.md)
domains:
  primary:
    - domain: requirements-validation
      agents: [qe-quality-criteria-recommender, qe-requirements-validator]
    - domain: coverage-analysis
      agents: [qe-risk-assessor]
  conditional:
    - domain: security-compliance
      agents: [qe-security-auditor]
    - domain: visual-accessibility
      agents: [qe-accessibility-auditor]
    - domain: cross-domain
      agents: [qe-qx-partner]
    - domain: enterprise-integration
      agents: [qe-middleware-validator, qe-sap-rfc-tester, qe-sod-analyzer]

# Agent Inventory
agents:
  core: [qe-quality-criteria-recommender, qe-risk-assessor, qe-requirements-validator]
  conditional: [qe-accessibility-auditor, qe-security-auditor, qe-qx-partner, qe-middleware-validator, qe-sap-rfc-tester, qe-sod-analyzer]
  total: 9
  sub_agents: 0

skills: [testability-scoring, risk-based-testing, context-driven-testing, holistic-testing-pact]

# Execution Models (Task Tool is PRIMARY)
execution:
  primary: task-tool
  alternatives: [mcp-tools, cli]
  swarm_pattern: true
  parallel_batches: 2

last_updated: 2026-01-28
# v7.5.1 Changelog: Added prominent follow-up recommendation box at end of swarm execution (Phase URL-9)
# v7.5.0 Changelog: Added HAS_VIDEO flag detection with /a11y-ally follow-up recommendation for video caption generation
# v7.4.0 Changelog: Automated browser cascade via scripts/fetch-content.js with 30s per-tier timeouts
# v7.2.0 Changelog: Added 5-tier browser cascade (Vibium → agent-browser → Playwright+Stealth → WebFetch → WebSearch)

html_output: true
enforcement_level: strict
tags: [qcsd, ideation, htsm, quality-criteria, risk-storming, testability,
  swarm, parallel, ddd]
trust_tier: 3
validation:
  schema_path: schemas/output.json
  validator_path: scripts/validate-config.json
  eval_path: evals/qcsd-ideation-swarm.yaml
---

# QCSD Ideation Swarm v7.5

Shift-left quality engineering swarm for PI Planning and Sprint Planning.

---

## URL-Based Analysis Mode (v7.1)

When analyzing a live website URL, use this specialized execution pattern.

### Parameters

- `URL`: Website to analyze (required)
- `OUTPUT_FOLDER`: Where to save reports (default: `${PROJECT_ROOT}/Agentic QCSD/{domain}/` or `./Agentic QCSD/{domain}/`)

---

## ⛔ URL MODE: COMPLETE EXECUTION FLOW

**You MUST follow ALL phases in order. Skipping phases is a FAILURE.**

### PHASE URL-1: Setup and Content Fetch (AUTOMATED CASCADE)

**The browser cascade is now FULLY AUTOMATED via `scripts/fetch-content.js`.**

**Single command - automatic tier fallback with 30s timeout per tier:**

```bash
# SINGLE COMMAND - handles all tiers automatically:
# Use npx for the installed package, or node with a relative path for local development
npx aqe fetch-content "${URL}" "${OUTPUT_FOLDER}" --timeout 30000

# OR if running from the project root:
node ./scripts/fetch-content.js "${URL}" "${OUTPUT_FOLDER}" --timeout 30000
```

**What the script does automatically:**

1. Creates the output folder
2. Tries Playwright+Stealth (30s timeout)
3. Falls back to HTTP Fetch (30s timeout)
4. Falls back to WebSearch placeholder (30s timeout)
5. Saves `content.html`, `screenshot.png`, and `fetch-result.json`

**Execution:**

```javascript
// 1. Run the automated fetch cascade (use relative path from project root)
const fetchResult = Bash({
  command: `node ./scripts/fetch-content.js "${URL}" "${OUTPUT_FOLDER}" --timeout 30000`,
  timeout: 120000  // 2 min total max
})

// 2. Parse the JSON result from stdout
const result = JSON.parse(fetchResult.stdout)

// 3.
// Read the content
const content = Read({ file_path: `${OUTPUT_FOLDER}/content.html` })
const fetchMethod = result.tier
const contentSize = result.contentSize
```

**If the script is not available, fall back to inline Playwright:**

```javascript
// FALLBACK: Only if scripts/fetch-content.js doesn't exist
Bash({ command: `mkdir -p "${OUTPUT_FOLDER}"` })

// Quick Playwright fetch (single tier, no cascade)
Bash({
  command: `cd /tmp && rm -rf qcsd-fetch && mkdir qcsd-fetch && cd qcsd-fetch && npm init -y && npm install playwright-extra puppeteer-extra-plugin-stealth playwright 2>/dev/null`,
  timeout: 60000
})
// ... minimal inline script as last resort
```

**MANDATORY: Output fetch method used:**

```
┌─────────────────────────────────────────────────────────────┐
│ CONTENT FETCH RESULT                                        │
├─────────────────────────────────────────────────────────────┤
│ Method Used: [vibium/agent-browser/playwright/webfetch/     │
│              websearch-fallback]                            │
│ Content Size: [X KB]                                        │
│ Status: [SUCCESS/DEGRADED]                                  │
│                                                             │
│ If DEGRADED (websearch-fallback), analysis is based on      │
│ public information, not live page inspection.               │
└─────────────────────────────────────────────────────────────┘
```

### PHASE URL-2: Programmatic Flag Detection (MANDATORY)

**You MUST detect flags from the fetched content.
Do NOT skip this phase.**

```javascript
// Detect HAS_UI
const HAS_UI = (
  /<(form|button|input|select|textarea|img|video|canvas|nav|header|footer|aside)/i.test(content) ||
  /carousel|slider|modal|dialog|dropdown|menu|tab|accordion/i.test(content) ||
  /class=["'][^"']*btn|button|card|grid|flex/i.test(content)
);

// Detect HAS_SECURITY
const HAS_SECURITY = (
  /login|password|auth|token|session|credential|oauth|jwt|sso/i.test(content) ||
  /newsletter|subscribe|signup|email.*input|register/i.test(content) ||  // PII collection
  /payment|checkout|credit.*card|billing/i.test(content) ||
  /cookie|consent|gdpr|privacy/i.test(content)
);

// Detect HAS_UX
const HAS_UX = (
  /user|customer|visitor|journey|experience|engagement/i.test(content) ||
  /
```
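The `HAS_UI` and `HAS_SECURITY` heuristics above can be exercised outside the swarm as a plain Node.js script. This is a minimal sketch: the sample markup and the trimmed-down pattern subset are illustrative, not the full detection rule set.

```javascript
// Sketch: run flag heuristics against a sample page (Node.js).
// `sampleContent` stands in for the fetched content.html.
const sampleContent = `
  <form action="/subscribe">
    <input type="email" name="email" placeholder="Newsletter signup">
    <button type="submit">Subscribe</button>
  </form>`;

// Same style of regex heuristics as the detection phase above
const HAS_UI = /<(form|button|input|select|textarea|img|video|canvas|nav|header|footer|aside)/i
  .test(sampleContent);
const HAS_SECURITY = /newsletter|subscribe|signup|email.*input|register/i
  .test(sampleContent);

console.log(JSON.stringify({ HAS_UI, HAS_SECURITY }));
// → {"HAS_UI":true,"HAS_SECURITY":true}
```

A page with a signup form trips both flags: the `<form>`/`<input>` tags satisfy the UI check, and the newsletter/email-input patterns mark it as PII-collecting for the security check.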
```
criticalRisks > 2 → NO-GO (reason: "Too many critical risks")

STEP 2: Check GO conditions (ALL required for GO)
─────────────────────────────────────────────────
IF testabilityScore >= 80
   AND htsmCoverage >= 8
   AND acCompleteness >= 90
   AND criticalRisks == 0
→ GO

STEP 3: Default
─────────────────────────────────────────────────
ELSE → CONDITIONAL
```

### Decision Recording

```
METRICS:
- testabilityScore = __/100
- htsmCoverage = __/10
- acCompleteness = __%
- criticalRisks = __

NO-GO CHECK:
- testabilityScore < 40? __ (YES/NO)
- htsmCoverage < 6? __ (YES/NO)
- acCompleteness < 50? __ (YES/NO)
- criticalRisks > 2? __ (YES/NO)

GO CHECK (only if no NO-GO triggered):
- testabilityScore >= 80? __ (YES/NO)
- htsmCoverage >= 8? __ (YES/NO)
- acCompleteness >= 90? __ (YES/NO)
- criticalRisks == 0? __ (YES/NO)

FINAL RECOMMENDATION: [GO / CONDITIONAL / NO-GO]
REASON: ___
```

---

## PHASE 6: Generate Ideation Report

### ⛔ ENFORCEMENT: COMPLETE REPORT STRUCTURE

**ALL sections below are MANDATORY. No abbreviations.**

```markdown
# QCSD Ideation Report: [Epic Name]

**Generated**: [Date/Time]
**Recommendation**: [GO / CONDITIONAL / NO-GO]
**Agents Executed**: [List all agents that ran]

---

## Executive Summary

| Metric | Value | Threshold | Status |
|--------|-------|-----------|--------|
| HTSM Coverage | X/10 | ≥8 | ✅/⚠️/❌ |
| Testability Score | X% | ≥80% | ✅/⚠️/❌ |
| AC Completeness | X% | ≥90% | ✅/⚠️/❌ |
| Critical Risks | X | 0 | ✅/⚠️/❌ |

**Recommendation Rationale**: [1-2 sentences explaining why GO/CONDITIONAL/NO-GO]

---

## Quality Criteria Analysis (HTSM v6.3)

[EMBED or LINK the HTML report from qe-quality-criteria-recommender]

### Priority Items Summary

| Priority | Count | Categories |
|----------|-------|------------|
| P0 (Critical) | X | [list] |
| P1 (High) | X | [list] |
| P2 (Medium) | X | [list] |
| P3 (Low) | X | [list] |

### Cross-Cutting Concerns

[List any concerns that span multiple categories]

---

## Risk Assessment

### Risk Matrix

| ID | Risk | Category | L | I | Score | Mitigation |
|----|------|----------|---|---|-------|------------|
[ALL risks from qe-risk-assessor, sorted by score]

### Critical Risks (Score ≥ 15)

[Highlight critical risks with detailed mitigation plans]

### Risk Distribution

- Technical: X risks
- Business: X risks
- Quality: X risks
- Integration: X risks

---

## Requirements Validation

### Testability Score: X/100

| Principle | Score | Notes |
|-----------|-------|-------|
[All 10 principles from qe-requirements-validator]

### AC Completeness: X%

| AC | Status | Issues |
|----|--------|--------|
[All ACs evaluated]

### Gaps Identified

1. [Gap 1]
2. [Gap 2]
[All gaps from qe-requirements-validator]

---

## Conditional Analysis

[INCLUDE ONLY IF APPLICABLE - based on which conditional agents ran]

### Accessibility Review (IF HAS_UI)
[Full output from qe-accessibility-auditor]

### Security Assessment (IF HAS_SECURITY)
[Full output from qe-security-auditor]

### Quality Experience (IF HAS_UX)
[Full output from qe-qx-partner]

---

## Recommended Next Steps

### Immediate Actions (Before Development)
- [ ] [Action based on findings]
- [ ] [Action based on findings]

### During Development
- [ ] [Action based on findings]

### Pre-Release
- [ ] [Action based on findings]

---

## Appendix: Agent Outputs

[Link to or embed full outputs from each agent]

---

*Generated by QCSD Ideation Swarm v7.5.1*
*Execution Model: Task Tool Parallel Swarm*
```

### Report Validation Checklist

Before presenting the report:

```
✓ Executive Summary table is complete with all 4 metrics
✓ Recommendation matches decision logic output
✓ Quality Criteria section includes priority summary
✓ Risk Matrix includes ALL identified risks
✓ Testability score shows all 10 principles
✓ All gaps are listed
✓ Conditional sections included for all spawned agents
✓ Next steps are specific and actionable (not generic)
```

**❌ DO NOT present an incomplete report.**

---

## PHASE 7: Store Learnings & Persist State

### Purpose

Store ideation findings for:

- Cross-phase
  feedback loops (Production → next Ideation cycle)
- Historical analysis of GO/CONDITIONAL/NO-GO decisions
- Pattern learning across epics

### Option A: MCP Memory Tools (RECOMMENDED)

```javascript
// Store ideation findings
mcp__agentic-qe__memory_store({
  key: `qcsd-ideation-${epicId}-${Date.now()}`,
  namespace: "qcsd-ideation",
  value: {
    epicId: epicId,
    epicName: epicName,
    recommendation: recommendation,  // GO, CONDITIONAL, NO-GO
    metrics: {
      htsmCoverage: htsmCoverage,
      testabilityScore: testabilityScore,
      acCompleteness: acCompleteness,
      criticalRisks: criticalRisks
    },
    domains: {
      requirementsValidation: true,
      coverageAnalysis: true,
      securityCompliance: HAS_SECURITY,
      visualAccessibility: HAS_UI,
      crossDomain: HAS_UX
    },
    agentsInvoked: agentList,
    timestamp: new Date().toISOString()
  }
})

// Share learnings with learning coordinator for cross-domain patterns
mcp__agentic-qe__memory_share({
  sourceAgentId: "qcsd-ideation-swarm",
  targetAgentIds: ["qe-learning-coordinator", "qe-pattern-learner"],
  knowledgeDomain: "ideation-patterns"
})

// Query previous ideation results for similar epics
mcp__agentic-qe__memory_query({
  pattern: "qcsd-ideation-*",
  namespace: "qcsd-ideation"
})
```

### Option B: CLI Memory Commands

```bash
# Store ideation findings
npx @claude-flow/cli@latest memory store \
  --key "qcsd-ideation-${EPIC_ID}" \
  --value '{"recommendation":"GO","testabilityScore":85,"htsmCoverage":9}' \
  --namespace qcsd-ideation

# Search for similar epics
npx @claude-flow/cli@latest memory search \
  --query "ideation recommendation" \
  --namespace qcsd-ideation

# List all ideation records
npx @claude-flow/cli@latest memory list \
  --namespace qcsd-ideation

# Post-task hook for learning
npx @claude-flow/cli@latest hooks post-task \
  --task-id "qcsd-ideation-${EPIC_ID}" \
  --success true
```

### Option C: Direct File Storage (Fallback)

If MCP/CLI are not available, save to `.agentic-qe/`:

```bash
# Output directory structure
.agentic-qe/
├── quality-criteria/
│   └── [epic-name]-htsm-analysis.html
├── ideation-reports/
│   └── [epic-name]-ideation-report.md
└── learnings/
    └── [epic-id]-ideation-metrics.json
```

---

## Quick Reference

### Enforcement Summary

| Phase | Must Do | Failure Condition |
|-------|---------|-------------------|
| 1 | Check ALL 3 flags | Missing flag evaluation |
| 2 | Spawn ALL 3 core agents in ONE message | Fewer than 3 Task calls |
| 3 | WAIT for completion | Proceeding before results |
| 4 | Spawn ALL flagged conditional agents | Skipping a TRUE flag |
| 5 | Apply EXACT decision logic | Wrong recommendation |
| 6 | Generate COMPLETE report | Missing sections |
| 7 | Store learnings (if MCP/CLI available) | Pattern loss |

### Quality Gate Thresholds

| Metric | GO | CONDITIONAL | NO-GO |
|--------|-----|-------------|-------|
| Testability | ≥80% | 40-79% | <40% |
| HTSM Coverage | ≥8/10 | 6-7/10 | <6/10 |
| AC Completeness | ≥90% | 50-89% | <50% |
| Critical Risks | 0 | 1-2 | >2 |

### Domain-to-Agent Mapping

| Domain | Agent | Primary Phase |
|--------|-------|---------------|
| requirements-validation | qe-quality-criteria-recommender | Ideation (P) |
| requirements-validation | qe-requirements-validator | Ideation (P) |
| coverage-analysis | qe-risk-assessor | Ideation (P) |
| security-compliance | qe-security-auditor | Ideation (S - conditional) |
| visual-accessibility | qe-accessibility-auditor | Ideation (S - conditional) |
| cross-domain | qe-qx-partner | Ideation (S - conditional) |

### Execution Model Quick Reference

| Model | Initialization | Agent Spawn | Memory Store |
|-------|---------------|-------------|--------------|
| **Task Tool** | N/A | `Task({ subagent_type, run_in_background: true })` | N/A (use MCP) |
| **MCP Tools** | `fleet_init({})` | `task_submit({})` | `memory_store({})` |
| **CLI** | `swarm init` | `agent spawn` | `memory store` |

### MCP Tools Quick Reference

```javascript
// Initialization
mcp__agentic-qe__fleet_init({ topology: "hierarchical", enabledDomains: [...], maxAgents: 6 })

// Task submission
mcp__agentic-qe__task_submit({ type: "...", priority: "p0", payload: {...} })
mcp__agentic-qe__task_orchestrate({ task: "...", strategy: "parallel" })

// Status
mcp__agentic-qe__fleet_status({ verbose: true })
mcp__agentic-qe__task_list({ status: "pending" })

// Memory
mcp__agentic-qe__memory_store({ key: "...", value: {...}, namespace: "qcsd-ideation" })
mcp__agentic-qe__memory_query({ pattern: "qcsd-*", namespace: "qcsd-ideation" })
mcp__agentic-qe__memory_share({ sourceAgentId: "...", targetAgentIds: [...], knowledgeDomain: "..." })
```

### CLI Quick Reference

```bash
# Initialization
npx @claude-flow/cli@latest swarm init --topology hierarchical --max-agents 6

# Agent operations
npx @claude-flow/cli@latest agent spawn --type [agent-type] --task "[description]"
npx @claude-flow/cli@latest hooks pre-task --description "[task]"
npx @claude-flow/cli@latest hooks post-task --task-id "[id]" --success true

# Status
npx @claude-flow/cli@latest swarm status

# Memory
npx @claude-flow/cli@latest memory store --key "[key]" --value "[json]" --namespace qcsd-ideation
npx @claude-flow/cli@latest memory search --query "[query]" --namespace qcsd-ideation
npx @claude-flow/cli@latest memory list --namespace qcsd-ideation
```

### Swarm Topology

```
         QCSD IDEATION SWARM v7.5
                     │
     ┌───────────────┼───────────────┐
     │               │               │
┌────▼────┐    ┌─────▼─────┐   ┌─────▼─────┐
│Quality  │    │   Risk    │   │    AC     │
│Criteria │    │ Assessor  │   │ Validator │
│ (HTML)  │    │           │   │           │
│─────────│    │───────────│   │───────────│
│req-valid│    │cov-anlysis│   │req-valid  │
└────┬────┘    └─────┬─────┘   └─────┬─────┘
     │               │               │
     └───────────────┼───────────────┘
                     │
              [QUALITY GATE]
                     │
     ┌───────────────┼───────────────┐
     │               │               │
┌────▼────┐    ┌─────▼─────┐   ┌─────▼─────┐
│  A11y   │    │ Security  │   │    QX     │
│[IF UI]  │    │[IF AUTH]  │   │  [IF UX]  │
│─────────│    │───────────│   │───────────│
│vis-a11y │    │sec-compli │   │cross-dom  │
└─────────┘    └───────────┘   └───────────┘
```

---

## Inventory Summary

| Resource Type | Count | Primary | Conditional |
|---------------|:-----:|:-------:|:-----------:|
| **Agents** | 9 | 3 | 6 |
| **Sub-agents** | 0 | - | - |
| **Skills** | 4 | 4 | - |
| **Domains** | 6 | 2 | 4 |

**Skills Used:**

1. `testability-scoring` - 10 testability principles
2. `risk-based-testing` - Risk prioritization
3. `context-driven-testing` - Context-appropriate strategy
4. `holistic-testing-pact` - PACT methodology (People, Activities, Contexts, Technologies)

---

## Key Principle

**Quality is built in from the start, not tested in at the end.**

This swarm provides:

1. **What quality criteria matter?** → HTSM Analysis (10 categories)
2. **What risks exist?** → Risk Storming (4 categories)
3. **Are requirements testable?** → AC Validation (10 principles)
4. **Is it accessible/secure/good UX?** → Conditional specialists
5. **Should we proceed?** → GO/CONDITIONAL/NO-GO decision
6. **What did we learn?** → Memory persistence for future cycles
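The Phase 5 gate logic can be condensed into a single function. This is a sketch only: the threshold values come from the Quality Gate Thresholds table, while the function and parameter names are illustrative, not part of the swarm's API.

```javascript
// Sketch of the GO / CONDITIONAL / NO-GO quality gate, using the
// thresholds from the Quality Gate Thresholds table.
function decide({ testabilityScore, htsmCoverage, acCompleteness, criticalRisks }) {
  // STEP 1: NO-GO triggers (any single one is enough)
  if (testabilityScore < 40 || htsmCoverage < 6 || acCompleteness < 50 || criticalRisks > 2) {
    return "NO-GO";
  }
  // STEP 2: GO requires ALL conditions to hold
  if (testabilityScore >= 80 && htsmCoverage >= 8 && acCompleteness >= 90 && criticalRisks === 0) {
    return "GO";
  }
  // STEP 3: anything in between is CONDITIONAL
  return "CONDITIONAL";
}

console.log(decide({ testabilityScore: 85, htsmCoverage: 9, acCompleteness: 95, criticalRisks: 0 }));  // → GO
console.log(decide({ testabilityScore: 85, htsmCoverage: 7, acCompleteness: 95, criticalRisks: 0 }));  // → CONDITIONAL
console.log(decide({ testabilityScore: 35, htsmCoverage: 9, acCompleteness: 95, criticalRisks: 0 }));  // → NO-GO
```

Note the ordering matters: NO-GO triggers are checked before GO conditions, so an epic with a disqualifying metric can never reach GO even if its other metrics are excellent.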