---
name: test-architect
description: >-
  Test strategy, coverage analysis, edge case identification, and flaky test
  diagnosis. Use when designing test suites. NOT for running tests
  (devops-engineer), TDD, or code review (honest-review).
argument-hint: "<mode> [target]"
model: opus
license: MIT
metadata:
  author: wyattowalsh
  version: "1.0"
---

# Test Architect

Design test strategies, analyze coverage gaps, identify edge cases, diagnose flaky tests, and audit test suite architecture.

**Scope:** Test design and analysis only. NOT for running tests or CI/CD (devops-engineer), code review (honest-review), or TDD workflow.

## Dispatch

| $ARGUMENTS | Action |
|------------|--------|
| `design <feature>` | Design test strategy and pyramid for a feature or module |
| `generate <target>` | Generate test cases (strategy text or actual test code, based on context) |
| `gaps` | Analyze coverage gaps from coverage reports |
| `edge-cases <function>` | Systematic edge case identification for a function |
| `flaky` | Diagnose flaky tests from logs and code |
| `review` | Audit test suite architecture |
| Empty | Show mode menu with examples |

## Canonical Vocabulary

Use these terms exactly throughout all modes:

| Term | Definition |
|------|------------|
| **test pyramid** | Layered test distribution: unit (base), integration (middle), e2e (top) |
| **coverage gap** | Code path with no test coverage, weighted by complexity risk |
| **edge case** | Input at boundary conditions: null/empty, type coercion, overflow, unicode, concurrency |
| **flaky test** | Test with non-deterministic pass/fail behavior across identical runs |
| **mutation score** | Percentage of injected mutations detected by the test suite |
| **test strategy** | Document defining what to test, how, at what layer, and with what tools |
| **property-based test** | Test asserting invariants over generated inputs (Hypothesis/fast-check) |
| **test isolation** | Guarantee that tests share no mutable state or execution-order dependencies |
| **fixture** | Reusable test setup/teardown providing controlled state |
| **test surface** | Set of public interfaces, code paths, and states requiring test coverage |

## Mode 1: Design

`/test-architect design <feature>`

### Surface Analysis

1. Read the feature/module code. Map the test surface: public API, internal paths, state transitions, error conditions.
2. Classify complexity: simple (pure functions), moderate (I/O, state), complex (distributed, concurrent, multi-service).

### Pyramid Design

3. Design the test pyramid:
   - **Unit layer:** Pure logic, transformations, validators. Target: 70-80% of tests.
   - **Integration layer:** Database, API, file I/O, service boundaries. Target: 15-25%.
   - **E2E layer:** Critical user flows only. Target: 5-10%.
4. For each layer, list specific test cases with: description, input, expected output, rationale.
5. Recommend a framework and tooling based on the language/ecosystem.
6. Output: a structured strategy document with pyramid diagram, case list, and priority order.

Reference: read references/test-pyramid.md for layer guidance.

## Mode 2: Generate

`/test-architect generate <target>`

1. Read the target file/function. Identify its signature, dependencies, and side effects.
2. Determine the output format:
   - If a test file exists for the target: generate actual test code matching the existing patterns.
   - If no test file exists: generate test strategy text with case descriptions.
   - If the user specifies `--code`: always generate test code.
3. Generate test cases covering:
   - Happy path (expected inputs and outputs)
   - Error path (invalid inputs, exceptions, timeouts)
   - Edge cases (run edge-case-generator.py if the function has typed parameters)
   - Boundary conditions (min/max values, empty collections, null)
4. Follow framework conventions: read references/framework-patterns.md for pytest/jest/vitest patterns.
5. Output: test cases or test code with clear section headers per category.

## Mode 3: Gaps

`/test-architect gaps`

1. Locate coverage reports.
   Search for:
   - `coverage.json`, `coverage.xml`, `.coverage` (Python/coverage.py)
   - `lcov.info`, `coverage/lcov.info` (JS/lcov)
   - `htmlcov/`, `coverage/` directories
2. Run the coverage analyzer:
   ```
   uv run python skills/test-architect/scripts/coverage-analyzer.py
   ```
3. Parse the JSON output. Rank gaps by complexity-weighted risk.
4. For each gap, assess:
   - What code is untested and why it matters
   - Complexity score (cyclomatic complexity proxy)
   - Recommended test type (unit/integration/e2e)
   - Priority (P0: security/auth, P1: core logic, P2: utilities, P3: cosmetic)
5. Render a dashboard if there are 10+ gaps:
   ```
   Copy templates/dashboard.html to a temporary file
   Inject gap data JSON into
   ```
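The complexity-weighted ranking in steps 3-4 of Mode 3 can be sketched as below. This is a minimal illustration, not coverage-analyzer.py's actual logic: the `gaps`, `uncovered_lines`, and `complexity` field names are assumptions, since the analyzer's real JSON schema is not shown here.

```python
# Hypothetical sketch of steps 3-4: parse the analyzer's JSON output and
# rank gaps by complexity-weighted risk. Field names are assumed.
import json


def rank_gaps(report_json: str) -> list[dict]:
    """Return coverage gaps sorted by complexity-weighted risk, highest first."""
    gaps = json.loads(report_json)["gaps"]
    for gap in gaps:
        # Risk proxy: uncovered line count scaled by cyclomatic complexity.
        gap["risk"] = gap["uncovered_lines"] * gap["complexity"]
    return sorted(gaps, key=lambda g: g["risk"], reverse=True)


if __name__ == "__main__":
    sample = json.dumps({"gaps": [
        {"path": "auth.py", "uncovered_lines": 12, "complexity": 8},
        {"path": "utils.py", "uncovered_lines": 30, "complexity": 2},
    ]})
    # auth.py (risk 96) ranks above utils.py (risk 60) despite fewer
    # uncovered lines, matching the P0 security/auth priority in step 4.
    print([g["path"] for g in rank_gaps(sample)])
```

The sort key is deliberately multiplicative so that a short but branchy untested function outranks a long, trivial one, which is what "complexity-weighted risk" implies.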