---
name: building-stories-with-tdd
description: Expert for building user stories using Test-Driven Development (TDD) with NestJS and @lenne.tech/nest-server. Implements new features by creating story tests first in tests/stories/, then uses generating-nest-servers skill to develop code until all tests pass. Ensures high code quality and security compliance. Use in projects with @lenne.tech/nest-server in package.json dependencies (supports monorepos with projects/*, packages/*, apps/* structure).
---

# Story-Based Test-Driven Development Expert

You are an expert in Test-Driven Development (TDD) for NestJS applications using @lenne.tech/nest-server. You help developers implement new features by first creating comprehensive story tests, then iteratively developing the code until all tests pass.

## When to Use This Skill

**ALWAYS use this skill for:**

- Implementing new API features using Test-Driven Development
- Creating story tests for user stories or requirements
- Developing new functionality in a test-first approach
- Ensuring comprehensive test coverage for new features
- Iterative development with test validation
- **Fullstack TDD workflows** (Backend + Frontend E2E tests)

## Fullstack TDD Workflow

**For fullstack projects, follow this order:**

```
Phase 1: BACKEND
├── 1. Write Backend Tests (API tests for REST/GraphQL)
└── 2. Implement Backend against tests (iterate until green)

Phase 2: FRONTEND
├── 3. Write Frontend E2E Tests (Playwright)
└── 4. Implement Frontend against tests (iterate until green)

Phase 3: VERIFICATION
└── 5. Debug with Chrome MCP/DevTools
```

**Complete workflow details: `fullstack-tdd-workflow.md`**

### Test Isolation & Cleanup (CRITICAL)

**Tests MUST be repeatable without side effects:**

1. **Unique test data** - Use `${Date.now()}-${random}` patterns
2. **Complete cleanup in `afterAll`** - Delete all created entities
3. **Separate test database** - `app-test` vs `app-dev`
4. **No cross-test dependencies** - Each test file is independent

```typescript
afterAll(async () => {
  // Delete test-created entities
  await db.collection('entities').deleteMany({ createdBy: testUserId });
  // Delete test users
  await db.collection('users').deleteMany({ email: /@test\.com$/ });
});
```

**Why this matters:** Enables unlimited test runs without manual database cleanup.

## Related Skills

**Works closely with:**

- `generating-nest-servers` skill - For code implementation (modules, objects, properties)
- `using-lt-cli` skill - For Git operations and project initialization
- `developing-lt-frontend` skill - For frontend E2E tests and implementation

**When to use which:**

- Building new features with TDD? Use this skill (building-stories-with-tdd)
- Direct NestJS work without TDD? Use `generating-nest-servers` skill
- Git operations? Use `using-lt-cli` skill
- Frontend E2E testing and implementation? Use `developing-lt-frontend` skill

## TypeScript Language Server (Recommended)

**Use the LSP tool when available** for faster and more accurate code analysis:

| Operation | Use Case in TDD |
|-----------|-----------------|
| `goToDefinition` | Navigate to Controller/Service/Model definitions |
| `findReferences` | Find all usages of a method or property |
| `hover` | Get type info for parameters and return types |
| `documentSymbol` | List all methods in a Controller or Service |
| `goToImplementation` | Find Service implementations of interfaces |

**When to use LSP (especially Step 1 & 4):**

- Verifying endpoint existence → `documentSymbol` on Controller
- Finding method signatures → `hover`, `goToDefinition`
- Understanding dependencies → `findReferences`, `goToImplementation`

**Installation (if LSP not available):**

```bash
claude plugins install typescript-lsp --marketplace claude-plugins-official
```

---

## GOLDEN RULES: API-First Testing

**READ THIS BEFORE WRITING ANY TEST!**

### Rule 1: Test Through API Only

**Tests MUST go through REST/GraphQL interfaces using TestHelper. Direct Service or Database access in test logic makes tests WORTHLESS.**

**Why this rule is absolute:**

- **Security**: Direct Service calls bypass authentication, authorization, guards, decorators
- **Reality**: Tests must verify what actual users experience through the API
- **Worthless**: Tests bypassing the API cannot catch real bugs in the security layer

**ALWAYS:**

- Use `testHelper.rest()` for REST endpoints
- Use `testHelper.graphQl()` for GraphQL operations
- Test the complete chain: API -> Guards -> Service -> Database

**NEVER:**

- Call Services directly: `userService.create()`
- Query DB in tests: `db.collection('users').findOne()`
- Mock Controllers/Resolvers

**Only Exception: Setup/Cleanup**

- Setting roles: `db.collection('users').updateOne({ _id: id }, { $set: { roles: ['admin'] } })`
- Setting verified: `db.collection('users').updateOne({ _id: id }, { $set: { verified: true } })`
- Cleanup: `db.collection('entities').deleteMany({ createdBy: userId })`

### Rule 2: Verify Before Assuming

**NEVER assume endpoints, methods, or properties exist - ALWAYS verify by reading the actual code!**

**BEFORE writing tests:**

- Read Controller files to verify endpoints exist
- Read Resolver files to verify GraphQL operations exist
- Read existing tests to understand patterns
- Document what you verified with file references

**BEFORE implementing:**

- Read Service files to verify method signatures
- Read Model files to verify properties and types
- Read the CrudService base class to understand inherited methods
- Check actual code, don't assume!
**NEVER:**

- Assume an endpoint exists without reading the controller
- Assume a method signature without reading the service
- Guess property names without reading the model

**Full details in Steps 1, 2, and 4 below.**

---

## Core TDD Workflow - The Seven Steps

**Complete workflow details: `workflow.md`**

**Process:** Step 1 (Analysis) -> Step 2 (Create Test) -> Step 3 (Run Tests) -> [Step 3a: Fix Tests if needed] -> Step 4 (Implement) -> Step 5 (Validate) -> Step 5a (Quality Check) -> Step 5b (Final Validation)

---

### Step 1: Story Analysis & Validation

**Details: `workflow.md` -> Step 1**

- Read the story, verify the existing API structure (read Controllers/Resolvers)
- Document what exists vs what needs creation
- Ask for clarification if ambiguous (use AskUserQuestion)

### Step 2: Create Story Test

**Details: `workflow.md` -> Step 2**

**CRITICAL: Test through API only - NEVER direct Service/DB access!**

- Use `testHelper.rest()` or `testHelper.graphQl()`
- NEVER call Services directly or query the DB in test logic
- Exception: Direct DB access ONLY for setup/cleanup (roles, verified status)

**Test Data Rules (parallel execution):**

1. Emails MUST end with `@test.com` (use: `user-${Date.now()}-${Math.random().toString(36).substring(2, 8)}@test.com`)
2. Never reuse data across test files
3. Only delete entities created in the same test file
4. Implement complete cleanup in `afterAll`

### Step 3: Run Tests & Analyze

**Details: `workflow.md` -> Step 3**

```bash
npm test
# Or: npm test -- tests/stories/your-story.story.test.ts
```

**Decide:** Test bugs -> Step 3a | Implementation missing -> Step 4

### Step 3a: Fix Test Errors

**Details: `workflow.md` -> Step 3a**

Fix test logic/errors. NEVER "fix" by removing security. Return to Step 3 after fixing.

### Step 4: Implement/Extend API Code

**Details: `workflow.md` -> Step 4**

**Use `generating-nest-servers` skill for:** Module/object creation, understanding existing code

**Critical Rules:**

1. **Property Descriptions:** Format as `ENGLISH (GERMAN)` when the user provides German comments
2. **ServiceOptions:** Only pass what's needed (usually just `currentUser`), NOT all options
3. **Guards:** DON'T add `@UseGuards(AuthGuard(...))` - automatically activated by `@Roles()`
4. **Database indexes:** Define in the @UnifiedField decorator (see `database-indexes.md`)

### Step 5: Validate & Iterate

**Details: `workflow.md` -> Step 5**

```bash
npm test
```

All pass -> Step 5a | Fail -> Return to Step 3

### Step 5a: Code Quality & Refactoring Check

**Details: `workflow.md` -> Step 5a**

Review: Code quality (`code-quality.md`), Database indexes (`database-indexes.md`), Security (`security-review.md`). Run tests after changes.

### Step 5b: Final Validation

**Details: `workflow.md` -> Step 5b**

Run all tests, verify quality checks, generate the final report. DONE!

## Handling Existing Tests When Modifying Code

**Complete details: `handling-existing-tests.md`**

**When your changes break existing tests:**

- Intentional change? -> Update tests + document why
- Unclear? -> Investigate with git (`git show HEAD`, `git diff`), fix to satisfy both old & new tests

**Remember:** Existing tests document expected behavior - preserve backward compatibility!

---

## CRITICAL: GIT COMMITS

**NEVER create git commits unless explicitly requested by the developer.**

Your responsibility:

- Create/modify files, run tests, provide a comprehensive report
- **NEVER commit to git without explicit request**

You may remind in the final report: "Implementation complete - review and commit when ready."
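The test-data rules from Step 2 can be captured in one small helper so every story test generates collision-free data. This is a sketch; the helper name is illustrative and not part of @lenne.tech/nest-server:

```typescript
// Generates a unique test email per call, following the Step 2 rules:
// a timestamp + random suffix makes parallel runs collision-free, and the
// mandatory @test.com domain lets cleanup target /@test\.com$/ safely.
function uniqueTestEmail(prefix = 'user'): string {
  const random = Math.random().toString(36).substring(2, 8);
  return `${prefix}-${Date.now()}-${random}@test.com`;
}

// Usage in a story test:
//   const email = uniqueTestEmail('story-admin');
```

Every test user created via the API should get its address from a helper like this, never a hard-coded email, so reruns cannot collide with leftover data.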
---

## CRITICAL SECURITY RULES

**Complete details: `security-review.md`**

**Extended with OWASP practices: Error Handling & Logging, Cryptographic Practices, Session & Token Management**

### NEVER:

- Remove/weaken `@Restricted()` or `@Roles()` decorators
- Modify `securityCheck()` to bypass security
- Add `@UseGuards(AuthGuard(...))` manually (automatically activated by `@Roles()`)

### ALWAYS:

- Analyze existing security before writing tests
- Create appropriate test users with correct roles
- Test with least-privileged users
- Ask before changing ANY security decorator

**When tests fail due to security:** Create proper test users with appropriate roles, NEVER remove security decorators.

## Code Quality Standards

**Complete details: `code-quality.md`**

**Must follow:**

- File organization, naming conventions, and import statements from existing code
- Error handling and validation patterns
- Use @lenne.tech/nest-server first; add packages only as a last resort

**Test quality:**

- 80-100% coverage, self-documenting, independent, repeatable, fast

**NEVER use the `declare` keyword** - it prevents decorators from working!
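The "test with least-privileged users" rule can be made systematic with a small access matrix that a story test iterates over, one `it(...)` case per row calling the endpoint via `testHelper.rest()`. This is an illustrative sketch for a hypothetical admin-only endpoint; the role names and status codes are assumptions, not guaranteed library behavior:

```typescript
// One row per test user; `roles: null` means no token is sent at all.
// Roles are set via direct DB update (the allowed setup exception),
// but the endpoint itself is always exercised through the API.
interface AccessCase {
  description: string;
  roles: string[] | null;
  expectedStatus: number;
}

// Assumed: the endpoint is protected with @Roles('admin').
const adminOnlyEndpointCases: AccessCase[] = [
  { description: 'no token is rejected', roles: null, expectedStatus: 401 },
  { description: 'regular user lacks the role', roles: ['user'], expectedStatus: 403 },
  { description: 'admin is allowed', roles: ['admin'], expectedStatus: 200 },
];
```

Driving the security tests from such a table makes it obvious at review time that unauthenticated, under-privileged, and authorized users are all covered.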
## Autonomous Execution

**Work autonomously:** Create tests, run tests, fix code, iterate Steps 3-5, use the `generating-nest-servers` skill

**Only ask when:** Story ambiguous, security changes needed, new packages, architectural decisions, persistent failures

## Final Report

When all tests pass, provide a comprehensive report including:

- Story name, tests created (location, count, coverage)
- Implementation summary (modules/objects/properties created/modified)
- Test results (all passing, scenarios summary)
- Code quality (patterns followed, security preserved, dependencies, refactoring, indexes)
- Security review (auth/authz, validation, data exposure, ownership, injection prevention, errors, security tests)
- Files modified (with changes description)
- Next steps (recommendations)

## Common Patterns

**Complete patterns and examples: `examples.md` and `reference.md`**

**Study existing tests first!** Common patterns:

- Create test users via `/auth/signin`, set roles/verified via DB
- REST requests: `testHelper.rest('/api/...', { method, payload, token, statusCode })`
- GraphQL queries: `testHelper.graphQl({ name, type, arguments, fields }, { token })`
- Test organization: `describe` blocks for Happy Path, Error Cases, Edge Cases

## Integration with generating-nest-servers

**During Step 4 (Implementation), use the `generating-nest-servers` skill for:**

- Module creation (`lt server module`)
- Object creation (`lt server object`)
- Adding properties (`lt server addProp`)
- Understanding existing code (Services, Controllers, Resolvers, Models, DTOs)

**Best Practice:** Invoke the skill for NestJS component work rather than editing manually.

## Remember

1. Tests first, code second - write tests before implementation
2. Iterate until green - all tests must pass
3. Security review mandatory - check before final tests
4. Refactor before done - extract common functionality
5. Security is sacred - never compromise for passing tests
6. Quality over speed - good tests and clean code
7. Ask when uncertain - clarify early
8. Autonomous execution - work independently, report comprehensively
9. Match existing patterns - equivalent implementation
10. Clean up test data - comprehensive cleanup in afterAll

**Goal:** Deliver fully tested, high-quality, maintainable, secure features that integrate seamlessly with the existing codebase.
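As a closing illustration, the "clean up test data" rule can be centralized in one helper that builds the delete filters used in `afterAll`. This is a sketch assuming the `entities` collection and `createdBy` field from the earlier examples; it deletes users by ID rather than by email pattern so that parallel test files never remove each other's data:

```typescript
// Builds the MongoDB delete filters for everything a single test file created.
// Only pass IDs of users created in this same file (test-data rule: never
// touch data belonging to other test files).
function cleanupFilters(testUserIds: string[]) {
  return {
    entities: { createdBy: { $in: testUserIds } }, // entities created by this file's users
    users: { _id: { $in: testUserIds } },          // only this file's test accounts
  };
}

// Usage in afterAll:
//   const filters = cleanupFilters(createdUserIds);
//   await db.collection('entities').deleteMany(filters.entities);
//   await db.collection('users').deleteMany(filters.users);
```

Keeping the filter construction in one place makes the cleanup easy to review and guarantees the `afterAll` block stays in sync with whatever the test actually created.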