---
name: building-ai-chat
description: Builds AI chat interfaces and conversational UI with streaming responses, context management, and multi-modal support. Use when creating ChatGPT-style interfaces, AI assistants, code copilots, or conversational agents. Handles streaming text, token limits, regeneration, feedback loops, tool usage visualization, and AI-specific error patterns. Provides battle-tested components from leading AI products with accessibility and performance built in.
---
# AI Chat Interface Components
## Purpose
Define the emerging standards for AI/human conversational interfaces in the 2024-2025 AI integration boom. This skill leverages meta-knowledge from building WITH Claude to establish definitive patterns for streaming UX, context management, and multi-modal interactions. As the industry lacks established patterns, this provides the reference implementation others will follow.
## When to Use
Activate this skill when:
- Building ChatGPT-style conversational interfaces
- Creating AI assistants, copilots, or chatbots
- Implementing streaming text responses with markdown
- Managing conversation context and token limits
- Handling multi-modal inputs (text, images, files, voice)
- Dealing with AI-specific errors (hallucinations, refusals, limits)
- Adding feedback mechanisms (thumbs, regeneration, editing)
- Implementing conversation branching or threading
- Visualizing tool/function calling
## Quick Start
Minimal AI chat interface in under 50 lines:
```tsx
import { useChat } from 'ai/react';
export function MinimalAIChat() {
const { messages, input, handleInputChange, handleSubmit, isLoading, stop } = useChat();
return (
{messages.map(m => (
{m.content}
))}
{isLoading &&
AI is thinking...
}
);
}
```
For complete implementation with streaming markdown, see `examples/basic-chat.tsx`.
## Core Components
### Message Display
Build user, AI, and system message bubbles with streaming support:
```tsx
// User message
{message.content}
// AI message with streaming
{message.content}
{message.isStreaming && ▊}
// System message
{message.content}
```
For markdown rendering, code blocks, and formatting details, see `references/message-components.md`.
### Input Components
Create rich input experiences with attachments and voice:
```tsx
```
### Response Controls
Essential controls for AI responses:
```tsx
{isStreaming && (
)}
{!isStreaming && (
<>
>
)}
```
### Feedback Mechanisms
Collect user feedback to improve AI responses:
```tsx
```
## Streaming & Real-Time UX
Progressive rendering of AI responses requires special handling:
```tsx
// Use Streamdown for AI streaming (handles incomplete markdown)
import { Streamdown } from '@vercel/streamdown';
// Auto-scroll management
useEffect(() => {
if (shouldAutoScroll()) {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}
}, [messages]);
// Smart auto-scroll heuristic
function shouldAutoScroll() {
const threshold = 100; // px from bottom
const isNearBottom =
container.scrollHeight - container.scrollTop - container.clientHeight < threshold;
const userNotReading = !hasUserScrolledUp && !isTextSelected;
return isNearBottom && userNotReading;
}
```
For complete streaming patterns, auto-scroll behavior, and stop generation, see `references/streaming-ux.md`.
## Context Management
Communicate token limits clearly to users:
```tsx
// User-friendly token display
function TokenIndicator({ used, total }) {
const percentage = (used / total) * 100;
const remaining = total - used;
return (
{percentage > 80
? `⚠️ About ${Math.floor(remaining / 250)} messages left`
: `${Math.floor(remaining / 250)} pages of conversation remaining`}
);
}
```
For summarization strategies, conversation branching, and organization, see `references/context-management.md`.
## Multi-Modal Support
Handle images, files, and voice inputs:
```tsx
// Image upload with preview
function ImageUpload({ onUpload }) {
return (
{previews.map(preview => (
))}
);
}
```
For complete multi-modal patterns including voice and screen sharing, see `references/multi-modal.md`.
## Error Handling
Handle AI-specific errors gracefully:
```tsx
// Refusal handling
if (response.type === 'refusal') {
return (
I cannot help with that request.
Why?
{response.reason}
Try asking: {response.suggestion}
);
}
// Rate limit communication
if (error.code === 'RATE_LIMIT') {
return (
Please wait {error.retryAfter} seconds
);
}
```
For comprehensive error patterns, see `references/error-handling.md`.
## Tool Usage Visualization
Show when AI is using tools or functions:
```tsx
function ToolUsage({ tool }) {
return (
{tool.name}
{tool.status === 'running' && }
{tool.status === 'complete' && (
View details
{JSON.stringify(tool.result, null, 2)}
)}
);
}
```
For function calling, code execution, and web search patterns, see `references/tool-usage.md`.
## Implementation Guide
### Recommended Stack
Primary libraries (validated November 2025):
```bash
# Core AI chat functionality
npm install ai @ai-sdk/react @ai-sdk/openai
# Streaming markdown rendering
npm install @vercel/streamdown
# Syntax highlighting
npm install react-syntax-highlighter
# Security for LLM outputs
npm install dompurify
```
### Performance Optimization
Critical for smooth streaming:
```tsx
// Memoize message rendering
const MemoizedMessage = memo(Message, (prev, next) =>
prev.content === next.content && prev.isStreaming === next.isStreaming
);
// Debounce streaming updates
const debouncedUpdate = useMemo(
() => debounce(updateMessage, 50),
[]
);
// Virtual scrolling for long conversations
import { VariableSizeList } from 'react-window';
```
For detailed performance patterns, see `references/streaming-ux.md`.
### Security Considerations
Always sanitize AI outputs:
```tsx
import DOMPurify from 'dompurify';
function SafeAIContent({ content }) {
const sanitized = DOMPurify.sanitize(content, {
ALLOWED_TAGS: ['p', 'br', 'strong', 'em', 'code', 'pre', 'blockquote', 'ul', 'ol', 'li'],
ALLOWED_ATTR: ['class']
});
return {sanitized};
}
```
### Accessibility
Ensure AI chat is usable by everyone:
```tsx
// ARIA live regions for screen readers
{messages.map(msg => (
{msg.content}
))}
// Loading announcements
{isLoading ? 'AI is responding' : ''}
```
For complete accessibility patterns, see `references/accessibility.md`.
## Bundled Resources
### Scripts (Token-Free Execution)
- Run `scripts/parse_stream.js` to parse incomplete markdown during streaming
- Run `scripts/calculate_tokens.py` to estimate token usage and context limits
- Run `scripts/format_messages.js` to format message history for export
### References (Progressive Disclosure)
- `references/streaming-patterns.md` - Complete streaming UX patterns
- `references/context-management.md` - Token limits and conversation strategies
- `references/multimodal-input.md` - Image, file, and voice handling
- `references/feedback-loops.md` - User feedback and RLHF patterns
- `references/error-handling.md` - AI-specific error scenarios
- `references/tool-usage.md` - Visualizing function calls and tool use
- `references/accessibility-chat.md` - Screen reader and keyboard support
- `references/library-guide.md` - Detailed library documentation
- `references/performance-optimization.md` - Streaming performance patterns
### Examples
- `examples/basic-chat.tsx` - Minimal ChatGPT-style interface
- `examples/streaming-chat.tsx` - Advanced streaming with memoization
- `examples/multimodal-chat.tsx` - Images and file uploads
- `examples/code-assistant.tsx` - IDE-style code copilot
- `examples/tool-calling-chat.tsx` - Function calling visualization
### Assets
- `assets/system-prompts.json` - Curated prompts for different use cases
- `assets/message-templates.json` - Pre-built message components
- `assets/error-messages.json` - User-friendly error messages
- `assets/themes.json` - Light, dark, and high-contrast themes
## Design Token Integration
All visual styling uses the design-tokens system:
```css
/* Message bubbles use design tokens */
.message.user {
background: var(--message-user-bg, var(--color-primary));
color: var(--message-user-text, var(--color-white));
padding: var(--message-padding, var(--spacing-md));
border-radius: var(--message-border-radius, var(--radius-lg));
}
.message.ai {
background: var(--message-ai-bg, var(--color-gray-100));
color: var(--message-ai-text, var(--color-text-primary));
}
```
See `skills/design-tokens/` for complete theming system.
## Key Innovations
This skill provides industry-first solutions for:
- **Memoized streaming rendering** - 10-50x performance improvement
- **Intelligent auto-scroll** - User activity-aware scrolling
- **Token metaphors** - User-friendly context communication
- **Incomplete markdown handling** - Graceful partial rendering
- **RLHF patterns** - Effective feedback collection
- **Conversation branching** - Non-linear conversation trees
- **Multi-modal integration** - Seamless file/image/voice handling
- **Accessibility-first** - Built-in screen reader support
## Strategic Importance
This is THE most critical skill because:
1. **Perfect timing** - Every app adding AI (2024-2025 boom)
2. **No standards exist** - Opportunity to define patterns
3. **Meta-advantage** - Building WITH Claude = intimate UX knowledge
4. **Unique challenges** - Streaming, context, hallucinations all new
5. **Reference implementation** - Can become the standard others follow
Master this skill to lead the AI interface revolution.